
Lattice Gauge Theory


John Baez

Jul 7, 2003, 9:13:49 PM
Lately I've been working through some stuff about path
integrals in lattice gauge theory with Derek Wise, who is
a math grad student at UCR. Since I'm in Hong Kong this summer,
and we can't talk in person, we've agreed to keep up our
conversation on s.p.r.. In this post I'll jumpstart the
process by introducing the model we were talking about:
lattice QED with gauge group R.

QED is about quantized electromagnetic fields coupled to
charged matter, but we're starting out with a simple exactly
solvable theory where we leave out the matter.

We assume spacetime is a "2-complex", a thing that consists
of a set V of vertices, a set E of edges, and a set F of
polygonal faces, together with maps that say how these
things fit together. I don't want to get too technical
about this yet... but we may do so eventually. Of course
part of the ulterior motive is to get into spin foam models,
where these 2-complexes are important.

In any sort of lattice gauge theory there's a gauge group
G, and a "connection" A assigns to each edge e an element
of this group, say A(e). There's a notion of "curvature",
too: a thing F which assigns to each face f a group element
F(f). We get this from the connection by multiplying the
group elements A(e) as we go around all the edges of the face.
In general we need to know where to start as we go around
this tour, but if G is abelian it doesn't matter - and that's
the case we're considering first!

For now, we'll take G to be the real line, R. Later
we may look at what happens when G = U(1). There are
interesting differences between QED with gauge group R
and QED with gauge group U(1), at least in lattice gauge
theory.

Next, we define the "action" of any connection, S(A),
to be the sum over all faces of F(f)^2. This is a lattice
version of the usual Lagrangian for the electromagnetic
field. Of course we'd need a different formula if G
were not the real numbers, since then F(f) wouldn't be
just a number. But we can tackle that later - it's not
so hard.

Next, we say an observable O is any real-valued function
of the connection. We try to compute its vacuum expectation
value by doing a path integral like this:

        int O(A) exp(-S(A)) dA
<O> =  -------------------------
           int exp(-S(A)) dA

We're integrating over the space of all connections, so
in our example here we are just integrating over R^E,
that is, a product of copies of the real line, one for
each edge in our 2-complex.

I say we *try* to do this computation, because we might
not succeed. We get into quite deep waters if our 2-complex
has infinitely many edges - then we need to integrate over
an infinite-dimensional space! This is typical in quantum
field theory, but we want to start with something easier,
so for now let's assume the sets V, E and F are finite.

Even in this case there are problems because it turns out
that usually

int exp(-S(A)) dA = infinity

This normalizing factor at the bottom of the formula for
computing expectation values is called the "partition function".
Derek and I had just gotten around to seeing that it's
often infinite and that the reason for this is *gauge-invariance*.

So I'll start out by asking Derek: how can we see this
integral comes out to equal infinity? We want to analyze
where the infinity is coming from so we can figure
out how to cure it. Then, when we cure the problem, we
can actually try to compute expectation values of some
observables.

We will probably need a little review of Gaussian integrals
at some point....
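For when we get to that review, here's a quick numerical sanity check of the basic one-dimensional Gaussian integral that these finite-dimensional "path integrals" reduce to once the quadratic action is gauge-fixed. This is just a sketch in plain Python; the function name is mine, not anything standard:

```python
import math

def gaussian_integral(a, half_width=20.0, n=200001):
    """Trapezoidal approximation of int_{-oo}^{oo} exp(-a x^2) dx.

    The exact answer is sqrt(pi/a); the tails beyond +-half_width are
    negligible for the values of a used here.
    """
    h = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * h
        weight = 0.5 if i in (0, n - 1) else 1.0  # trapezoid endpoint weights
        total += weight * math.exp(-a * x * x)
    return total * h

# Compare against the closed form sqrt(pi/a):
for a in (0.5, 1.0, 2.0):
    print(a, gaussian_integral(a), math.sqrt(math.pi / a))
```

The agreement is to many decimal places, which is all we'll need when Gaussian integrals show up later in computing expectation values.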

dennis westra

Jul 9, 2003, 7:24:13 PM
I am quite interested in how things go in lattice QED, especially
since I am fond of some nice mathematics involved in physics. Mind if
I try to follow both of you? By the way, isn't the book of David
Ruelle (Statistical Physics, or something close) something that might
be of help in this field?

d/w

Robert C. Helling

Jul 9, 2003, 7:25:34 PM
On Tue, 8 Jul 2003 01:13:49 +0000 (UTC), John Baez <ba...@galaxy.ucr.edu> wrote:
> Lately I've been working through some stuff about path
> integrals in lattice gauge theory with Derek Wise, who is
> a math grad student at UCR. Since I'm in Hong Kong this summer,
> and we can't talk in person, we've agreed to keep up our
> conversation on s.p.r.. In this post I'll jumpstart the
> process by introducing the model we were talking about:
> lattice QED with gauge group R.

As you are not doing this by personal mail, I assume you want
other people to participate in this discussion.

> QED is about quantized electromagnetic fields coupled to
> charged matter, but we're starting out with a simple exactly
> solvable theory where we leave out the matter.

OK. You are doing pure YM theory, abelian as you say later.

> Next, we say an observable O is any real-valued function
> of the connection.

You do not require an observable to be gauge invariant?

> Even in this case there are problems because it turns out
> that usually
>
> int exp(-S(A)) dA = infinity
>
> This normalizing factor at the bottom of the formula for
> computing expectation values is called the "partition function".
> Derek and I had just gotten around to seeing that it's
> often infinite and that the reason for this is *gauge-invariance*.
>
> So I'll start out by asking Derek: how can we see this
> integral comes out to equal infinity? We want to analyze
> where the infinity is coming from so we can figure
> out how to cure it. Then, when we cure the problem, we
> can actually try to compute expectation values of some
> observables.

Hint hint: You might want to start with 2-complexes with a single
vertex, known as Eguchi-Kawai models. People have looked at the
existence of the integrals for non-abelian theories numerically

Werner Krauth (Ecole Normale Superieure) and Matthias Staudacher
(Potsdam, Max Planck Inst.), "Finite Yang-Mills Integrals",
AEI-063, Apr 1998. Published in Phys. Lett. B435 (1998) 350-355.
e-Print archive: hep-th/9804199

and later also analytically.

The first step is of course to cancel the volumes of the gauge group
between numerator and denominator by making a change of variables such
that the action only depends on some of the variables, and then having
the same (infinite) integral upstairs and downstairs. Lorentz gauge
(requiring the divergence of the connection, which again lives at the
vertices, to vanish) might be a good start, but there could be
obstructions related to the existence of harmonic one-forms on your
2-complex.

Just my $.02
Robert

--
.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oO
Robert C. Helling      Department of Applied Mathematics and Theoretical Physics
                       University of Cambridge
print "Just another    Phone: +44/1223/766870
stupid .sig\n";        http://www.aei-potsdam.mpg.de/~helling

Derek Wise

Jul 9, 2003, 7:34:28 PM
John Baez wrote:

> I'll start out by asking Derek: how can we see this
> integral comes out to equal infinity? We want to analyze
> where the infinity is coming from so we can figure
> out how to cure it.

Okay, in what follows, I'll describe some of the
things I've figured out about this business. I'll
repeat a little of what we talked about before you
flew to Hong Kong, John, both because I want to make
sure I have it clear myself, and for the benefit of
others who might be interested.

As John mentioned, our model for spacetime is a finite
2-complex: Roughly, this consists of finite sets V, E,
and F of Vertices, Edges connecting vertices, and
Faces whose boundaries consist of one or more edges.
All of the edges and faces need to have some
orientation, which may be fixed in any convenient way.
Here's a small example of such a model spacetime,
with 6 vertices, 7 edges, and two faces:

    e4       e7
*--->---*---<----*
|       |        |
|       |        |
e1^  f1 ^e3  f2  ^e6
|       |        |
|       |        |
*--->---*--->----*
    e2       e5

where we take the orientation of the faces to be
"counter-clockwise."

Since we are doing electrodynamics with gauge group R
instead of U(1), our gauge field is an R-connection,
A, which assigns to each edge a real number:

A:E-->R

It's convenient to write A1:=A(e1), A2:=A(e2), etc.,
and then we can, given a connection, label our
spacetime directly by the group elements assigned to
its edges:

    A4       A7
*--->---*---<----*
|       |        |
|       |        |
A1^     ^A3      ^A6
|       |        |
|       |        |
*--->---*--->----*
    A2       A5

Similarly, the field strength, or curvature, assigns a
group element to each face:
F:F-->R

(hmmm... we're using F twice here. Perhaps we should
change the set of faces to the set of "plaquettes" and
call it P?)

Now, John said in his post that to get the curvature
of a face we "multiply" the group elements A(e) as we
go around the face. What he *really* meant, though,
is that we use the *group* operation as we go
around, which in the case of R is *addition*. So, to
find the curvature, we go around the boundary of the
face, according to its ccw orientation, and add up the
group elements. In doing this we have to pay
attention to orientation: if the orientation of an
edge coincides with that of the face, we just add the
group element; if the orientations are opposite, we
must add its negative. So, for example, in our model
above,

F1:= F(f1) = -A1 + A2 + A3 - A4
F2:= F(f2) = -A3 + A5 + A6 + A7.

Finally, we are ready to compute the action of this
particular connection
on our model spacetime. It is:

S(A) = F1^2 + F2^2.
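Since everything here is finite, this example can be checked directly. Here's a minimal Python sketch; names like `faces`, `curvature`, and `action` are my own, read off from the diagram and the orientation rules above, not notation from the thread:

```python
# A1..A7 as a 0-indexed list of arbitrary real values.
A = [0.3, -1.2, 0.5, 2.0, 0.7, -0.4, 1.1]

# Each face is a list of (edge index, sign) pairs: sign +1 if the edge's
# orientation agrees with the face's ccw orientation, -1 otherwise.
faces = [
    [(0, -1), (1, +1), (2, +1), (3, -1)],   # f1: -A1 + A2 + A3 - A4
    [(2, -1), (4, +1), (5, +1), (6, +1)],   # f2: -A3 + A5 + A6 + A7
]

def curvature(A, face):
    """Combine the edge group elements around the face; G = R, so 'multiply' = add."""
    return sum(sign * A[e] for e, sign in face)

def action(A):
    """S(A) = sum over faces of F(f)^2."""
    return sum(curvature(A, f) ** 2 for f in faces)
```

With the sample values above, F1 = -3.0, F2 = 0.9, and S(A) = 9.81.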

Now, the integral John was referring to above is this
one:

/
| exp(-S(A)) dA
/

taken over the space of connections, R^E. The
trouble, he mentioned, is that this is usually
infinity. In fact, I think it is infinity whenever
the model includes at least one face that has two or
more edges as its boundary! The reason is gauge
invariance.

What form does gauge invariance take in this theory?
It is clear that to change the connection without
affecting the curvature we need only ensure that the
sum of A(e) around each face is unchanged. For
example, we can make the following change:

    A4+t     A7+3t
*--->---*---<----*
|       |        |
|       |        |
A1+t^   ^A3+t    ^A6-t     (t any real number)
|       |        |
|       |        |
*--->---*--->----*
    A2+t     A5-t

and the curvature, hence also the action, stays the
same. This is because the connection

     t        3t
*--->---*---<----*
|       |        |
|       |        |
t ^     ^t       ^ -t
|       |        |
|       |        |
*--->---*--->----*
     t        -t

is "flat:" it has no curvature anywhere, even though
the connection is nontrivial. We can add any such
flat connection to a given connection and the field
strength stays the same.

So, we have discovered one degree of gauge freedom.
In fact, there are 4 more. I claim that there is one
degree of gauge freedom for every edge in a maximal
simply connected subgraph of (V,E). (Does this sound
right to you, John?)

Now this makes it easy to understand why the integral
in question is almost always infinity. Our 5 degrees
of gauge freedom constitute a 5 dimensional subspace
of R^7 (our space of connections). So, if we start at
the trivial connection, 0, there are 5 orthogonal
directions we can go in and keep S(A)=0. I.e., there's
a 5 dimensional subspace on which exp(-S(A))=1. So, if
we integrate exp(-S(A)) over R^E, we have to get
infinity, since R is not compact.
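This degeneracy is easy to see numerically. The curvature is a linear map on R^7, and the following sketch (numpy, with the matrix rows transcribed from F1 and F2 above) confirms that its null space, which contains the gauge directions, is 5-dimensional, and that the explicit flat connection above lies in it:

```python
import numpy as np

# The curvature is a linear map M: R^7 -> R^2, F = M A, with rows read
# off from F1 and F2. The action is S(A) = |M A|^2, so the degenerate
# directions of the quadratic form are exactly the null space of M.
M = np.array([
    [-1,  1,  1, -1,  0,  0,  0],   # F1 = -A1 + A2 + A3 - A4
    [ 0,  0, -1,  0,  1,  1,  1],   # F2 = -A3 + A5 + A6 + A7
])

rank = np.linalg.matrix_rank(M)
null_dim = M.shape[1] - rank        # rank-nullity: 7 - 2 = 5
print(null_dim)                     # 5 gauge directions, as claimed

# The explicit flat connection from the diagram (t = 1) should be killed by M:
flat = np.array([1, 1, 1, 1, -1, -1, 3])
print(M @ flat)                     # [0 0]
```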

So it seems gauge fixing might really be necessary in
this noncompact case. Can we not just calculate the
expectation values by integrating over a subspace
orthogonal to the one given by gauge freedom?

DeReK

=====
--------------------------
PS: If you want to send me spam, just use the address in the header!
Otherwise, to send me email, just send it to my first name at the domain math.ucr.edu.


Eric A. Forgy

Jul 10, 2003, 5:26:27 AM
Hi,

Thank you for opening up your discussion like this on s.p.r. I hope
that if I ask some naive questions here and there, I won't distract
from the subject too much.

ba...@galaxy.ucr.edu (John Baez) wrote:

[snip]

> In any sort of lattice gauge theory there's a gauge group
> G, and a "connection" A assigns to each edge e an element
> of this group, say A(e). There's a notion of "curvature",
> too: a thing F which assigns to each face f a group element
> F(f). We get this from the connection by multiplying the
> group elements A(e) as we go around all the edges of the face.
> In general we need to know where to start as we go around
> this tour, but if G is abelian it doesn't matter - and that's
> the case we're considering first!

[snip]

> Next, we define the "action" of any connection, S(A),
> to be the sum over all faces of F(f)^2. This is a lattice
> version of the usual Lagrangian for the electromagnetic
> field. Of course we'd need a different formula if G
> were not the real numbers, since then F(f) wouldn't be
> just a number. But we can tackle that later - it's not
> so hard.

My first naive question...

The action

S(A) = sum_f F(f)^2

looks a lot like a discrete version of

S(A) = int_M (F,F) vol

for some smooth manifold M. If you broke this up into little n-cells
C_i then you can write this as

S(A) = sum_i [int_{C_i} (F,F) vol].

If each of the cells C_i were actually little cubes, then I think that
after some manipulations this will reduce to something like

S(A) = sum_{f in C_i} F(f)^2 |C_i|

where |C_i| is the volume of C_i, which we could take to be 1 and then
we get something like

S(A) = sum_f F(f)^2.

(I basically wrote all that out in an attempt to describe the way I'm
thinking about this, which may be completely off. Please correct me if
I'm wrong.)

So my question is, if you are letting your action be (as you said)

S(A) = sum_f F(f)^2

then are you assuming your 2-complex is actually a 2-cube complex?

For a more general 2-complex, e.g. a simplicial complex, then I would
almost expect to see something like

S(A) = sum_{f1 in F} sum_{f2 in F} g(f1,f2) F(f1) F(f2),

where g(f1,f2) is some coefficient involving dot product of the
normals of f1,f2 (or maybe the edges).

Is there something about QED that makes this issue unimportant? In
CED, i.e. classical theory, I think it is important. Would you need to
modify your action if your 2-complex were simplicial? Dual to
simplicial? I seem to remember that spin networks were defined on the
dual of a simplicial 2-complex.

Best regards,
Eric

John Baez

Jul 10, 2003, 8:33:09 AM
In article <bei8q4$1up$1...@lfa222122.richmond.edu>,
Derek Wise <dere...@yahoo.com> wrote:

>I'll repeat a little of what we talked about before you
>flew to Hong Kong, John, both because I want to make
>sure I have it clear myself, and for the benefit of
>others who might be interested.

Great! Thanks for taking the time to fill in a bunch
more details and - even better - draw some examples!
It'll be really good to have those examples around as
we proceed.

One minor complaint: for some reason your article
didn't have a "References" line linking it back to mine.
You can see this by viewing your article on Google and
clicking on "View This Article Only" and then "Original Format".
If there's a way to correct this, that'd be nice.

(Luckily, Google somehow guessed that your article was
part of the same thread despite the lack of this "References"
line. But newsreader programs and other newsgroup archives
would be happier if that line were there. I'll include
the message-ID of my original article and also yours
on the "References" line of *this* post, and that should
link everything up.)

Ahem! On to some PHYSICS...

>As John mentioned, our model for spacetime is a finite

>2-complex [...]

>Here's a small example of such a model spacetime,
>with 6 vertices, 7 edges, and two faces:

>    e4       e7
>*--->---*---<----*
>|       |        |
>|       |        |
>e1^  f1 ^e3  f2  ^e6
>|       |        |
>|       |        |
>*--->---*--->----*
>    e2       e5

>where we take the orientation of the faces to be
>"counter-clockwise."

Okay, good. Pretty soon we'll want the vertices
to have names too, so let me name them now:


    e4      e7
v4-->---v5--<---v6
|       |       |
|       |       |
e1^  f1 ^e3  f2 ^e6
|       |       |
|       |       |
v1--->--v1--->--v3
    e2      e5


>Since we are doing electrodynamics with gauge group R
>instead of U(1), our gauge field is an R-connection,
>A, which assigns to each edge a real number:
>
> A:E-->R

Right...

>It's convenient to write A1:=A(e1), A2:=A(e2), etc.,
>and then we can, given a connection, label our
>spacetime directly by the group elements assigned to
>its edges:

>    A4       A7
>*--->---*---<----*
>|       |        |
>|       |        |
>A1^     ^A3      ^A6
>|       |        |
>|       |        |
>*--->---*--->----*
>    A2       A5

Okay! I'm not saying anything interesting yet;
I just want to burn the notation very thoroughly
into everyone's brain...

>Similarly, the field strength, or curvature, assigns a
>group element to each face:
> F:F-->R
>
>(hmmm... we're using F twice here. Perhaps we should
>change the set of faces to the set of "plaquettes" and
>call it P?)

Yes, let's do that! This sort of notational collision
always happens whenever you combine ideas from two subjects.
It's a real bummer, since physicists will throw a fit
if you call a U(1) curvature anything but "F" - god knows why -
while mathematicians will call it a sacrilege against Euler
if you write his famous formula as anything other than V-E+F.
And we will probably run into that formula sometime in this
discussion, at least if we get far enough!

But, since this is a physics newsgroup, and the lattice
gauge theorists like the word "plaquette", let's call those
faces "plaquettes", or at least call the set of them "P".

So, curvature is a map

F: P --> R

>Now, John said in his post that to get the curvature
>of a face we "multiply" the group elements A(e) as we
>go around the face. What he *really* meant, though,
>is that we use the *group* operation as we go
>around, which in the case of R is *addition*.

Yes: when you're a bigshot like me you can get away
with using "multiply" to mean "add", and nobody will
even criticize you for it.

So we get:

> F1:= F(f1) = -A1 + A2 + A3 - A4
> F2:= F(f2) = -A3 + A5 + A6 + A7.

>Finally, we are ready to compute the action of this
>particular connection
>on our model spacetime. It is:
>
> S(A) = F1^2 + F2^2.

Right. This is really some quadratic form on the 7d
vector space R^E, and you've shown it's *degenerate*,
and that causes the infinity in this integral:

/
| exp(-S(A)) dA
/

And you've shown it's degenerate because we can do
*gauge transformations* that change the A's but not the F's.

I have to run now... I'll continue replying to your
post later. But if you want to think about something,
you can think about precisely what gauge transformations
*are* in this model. We really want to think about the
set of them... the group of them, actually.

Hint: this is why I think it's a good idea to label the
vertices, too.

Another hint: everything works in perfect analogy with
electromagnetism in the continuum.

Yet another hint: to make physicists happy, we should
call a gauge transformation "phi", in this example.


Eric A. Forgy

Jul 10, 2003, 5:30:32 PM
A Note on Notation:
-------------------

Again, thanks for bringing this discussion out in the open. I am
definitely enthusiastic about following along. I've always wanted to
see a connection (no pun intended) made between LFT and spin
networks/foams.

Right off the bat, I see a clash of notation and I think it is
worthwhile to straighten out the notation before trudging ahead too
far. I have a suggestion that I hope might be agreeable to all.

In particular, we have a gauge group G, a connection A, and its
respective curvature F. We also have the set of vertices V, edges E,
and faces F. It was stated that the curvature F was a map

F:F->G

*clash!!*

However, I don't think the clash is with the notation, I think the
clash is with the domain of F. The curvature F is not a map from the
"set of faces F" to the gauge group G. Rather, it is a map from a
formal linear combination (over Z) of (oriented) faces to the gauge
group. There is already a good notation for a formal linear
combination of faces, i.e. 2-chains, C_2.

Therefore, instead of referring to the set of faces as "P", which is
bound to cause confusion and probably clash with something else later,
I suggest that the curvature F be considered as a map

F:C_2->G.

Then, the connection is a map

A:C_1->G,

where C_1 is the space of 1-chains.

In my experience (and I am ALWAYS bringing together material from
vastly different fields, e.g. pure maths, theoretical physics, and
electrical engineering :)), artificially changing notation, e.g. F->P,
turns around and bites you later on. It is better to try to conform to
the standards as much as possible and only deviate if there is truly
no alternative.

Just a suggestion.

Eric

Squark

Jul 10, 2003, 5:31:34 PM
Derek Wise <dere...@yahoo.com> wrote in message news:<bei8q4$1up$1...@lfa222122.richmond.edu>...

> What form does gauge invariance take in this theory?
> It is clear that to change the connection without
> affecting the curvature we need only ensure that the
> sum of A(e) around each face is unchanged. For
> example, we can make the following change:
>
>     A4+t     A7+3t
> *--->---*---<----*
> |       |        |
> |       |        |
> A1+t^   ^A3+t    ^A6-t     (t any real number)
> |       |        |
> |       |        |
> *--->---*--->----*
>     A2+t     A5-t
>
> and the curvature, hence also the action, stays the
> same. This is because the connection
>
>      t        3t
> *--->---*---<----*
> |       |        |
> |       |        |
> t ^     ^t       ^ -t
> |       |        |
> |       |        |
> *--->---*--->----*
>      t        -t
>
> is "flat:" it has no curvature anywhere, even though
> the connection is nontrivial. We can add any such
> flat connection to a given connection and the field
> strength stays the same.

A more explicit way to approach this would be to say a gauge
transformation assigns a real number to each vertex (g: V -> R)
and then we have t(e) = g(end(e)) - g(beginning(e)).

Best regards,
Squark

------------------------------------------------------------------

Write to me using the following e-mail:
Skvark_N...@excite.exe
(just spell the particle name correctly and change the
extension in the obvious way)

Jason

Jul 13, 2003, 2:11:37 AM
It's kind of intuitive that in the continuum limit of a lattice theory
with a regular or quasiregular or randomly regular lattice, the
resulting theory would be translationally invariant. However, it
wouldn't be true in general that a regular lattice (or even a
quasiregular lattice like a Penrose tiling) has an isotropic continuum
limit, even though it might be invariant under, say, a dihedral group
or a Platonic solid group. This certainly is true for most solid
crystals which are sufficiently annealed so that their domains spread
across the entire crystal. However, I notice that in most lattice
approximations, a lattice like a square or a cubic lattice is used. Is
there any guarantee, then, that in the continuum limit the resulting
theory would be isotropic?

Eric A. Forgy

Jul 14, 2003, 2:56:34 PM
to sci-physic...@moderators.isc.org

ba...@galaxy.ucr.edu (John Baez) wrote:
> Derek Wise <dere...@yahoo.com> wrote:

[snip]

> Okay, good. Pretty soon we'll want the vertices
> to have names too, so let me name them now:

(correcting the typo, i.e. duplicate v1)

>     e4      e7
> v4-->---v5--<---v6
> |       |       |
> |       |       |
> e1^  f1 ^e3  f2 ^e6
> |       |       |
> |       |       |
> v1--->--v2--->--v3
>     e2      e5

[snip]

> > A:E-->R

A:C_1-->R

:)

> >It's convenient to write A1:=A(e1), A2:=A(e2), etc.,

It may be convenient to keep track of sub/superscripts, e.g. e_1 is
the edge 1 and e^1 is the 1-cochain dual to the edge e_1, i.e.

e^i(e_j) = delta^i_j,

so that we can write

A = A_1 e^1 + A_2 e^2 + ...

Then we can even take advantage of Einstein summation convention to
write

A = A_u e^u.

> >and then we can, given a connection, label our
> >spacetime directly by the group elements assigned to
> >its edges:

(adding the subscripts)

>     A_4      A_7
> *--->---*---<----*
> |       |        |
> |       |        |
> A_1^    ^A_3     ^A_6
> |       |        |
> |       |        |
> *--->---*--->----*
>     A_2      A_5

[snip]

> So, curvature is a map
>
> F: P --> R

F:C_2-->R

*duck*

[snip]

> Right. This is really some quadratic form on the 7d
> vector space R^E, and you've shown it's *degenerate*,
> and that causes the infinity in this integral:
>
> /
> | exp(-S(A)) dA
> /
>
> And you've shown it's degenerate because we can do
> *gauge transformation* that change the A's but not the F's.
>
> I have to run now... I'll continue replying to your
> post later. But if you want to think about something,
> you can think about precisely what gauge transformations
> *are* in this model. We really want to think about the
> set of them... the group of them, actually.
>
> Hint: this is why I think it's a good idea to label the
> vertices, too.

Since we have introduced the vertices v_i, we can also introduce the
0-cochains v^i dual to the vertices, i.e.

v^i(v_j) = delta^i_j.

Then an arbitrary 0-cochain phi can be expressed as

phi = phi_u v^u.

The good thing about doing things this way is that now we can
introduce the coboundary map d:C_p-->C_{p+1}. Our sample 2-complex is
simple enough that we can go ahead and write it out explicitly for
each basis element:

dv^1 = -e^1-e^2
dv^2 = +e^2-e^3-e^5
dv^3 = +e^5-e^6
dv^4 = +e^1-e^4
dv^5 = +e^3+e^4+e^7
dv^6 = +e^6-e^7

The rule is simple, you simply include all edges incident on the
vertex in question with sign corresponding to whether it is directed
away (-) or toward (+) the vertex.

We can go one step further and write the explicit expressions for the
1-cochains

de^1 = -f^1
de^2 = +f^1
de^3 = +f^1-f^2
de^4 = -f^1
de^5 = +f^2
de^6 = +f^2
de^7 = +f^2

where f^1 and f^2 are the 2-cochains dual to the faces f_1 and f_2,
i.e.

f^i(f_j) = delta^i_j.

Again, the rule for computing de^i is simple: include all faces
incident on the edge, with the proper sign inherited from the face's
orientation.

Of course

df^1 = df^2 = 0.

I hope that doesn't look too convoluted because the payoff for that
effort is near :)

If we begin with our connection

A = A_u e^u,

then we arrive at the curvature F via application of the coboundary
map d

F
= dA
= A_u de^u.

Note that unlike in the continuum, the coboundary map passes right
over the coefficients A_u.

The coboundary map is nilpotent (d^2 = 0), which follows from "the
boundary of a boundary is zero", which is equally valid on a lattice
of discrete vertices, edges, and faces, as it is on smooth manifolds.

Therefore, if we begin with a connection A and add an exact 1-cochain
dphi to it for some 0-cochain phi, i.e.

A' = A + dphi

then these will both produce the same curvatures

F = dA = dA'.

For our sample 2-complex, we can compute dphi explicitly as

dphi =
+(phi_4 - phi_1) e^1
+(phi_2 - phi_1) e^2
+(phi_5 - phi_2) e^3
+(phi_5 - phi_4) e^4
+(phi_3 - phi_2) e^5
+(phi_6 - phi_3) e^6
+(phi_5 - phi_6) e^7,

which is just the oriented difference of the values at the two nodes
bounding each edge, assigned to that edge.

Since there are 6 nodes, you have 6 degrees of freedom (DoF) to define
phi, but since dphi is oblivious to a constant offset, i.e. adding a
constant to each node does not change the value of dphi, then we
really have only 5 DoF to specify dphi. This is another way to view
the 5-dimensional subspace of R^7 in our sample 2-complex.
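For what it's worth, this coboundary data can be packaged into incidence matrices and the two claims (d^2 = 0, and 5 DoF for dphi) checked mechanically. A numpy sketch, with the matrices transcribed from the dv^i and de^i lists above:

```python
import numpy as np

# Column j of D0 is dv^j expanded in the e^i basis (7 edges x 6 vertices);
# column j of D1 is de^j expanded in the f^i basis (2 faces x 7 edges).
D0 = np.array([
    [-1,  0,  0,  1,  0,  0],   # e^1 coefficients of dv^1..dv^6
    [-1,  1,  0,  0,  0,  0],   # e^2
    [ 0, -1,  0,  0,  1,  0],   # e^3
    [ 0,  0,  0, -1,  1,  0],   # e^4
    [ 0, -1,  1,  0,  0,  0],   # e^5
    [ 0,  0, -1,  0,  0,  1],   # e^6
    [ 0,  0,  0,  0,  1, -1],   # e^7
])
D1 = np.array([
    [-1,  1,  1, -1,  0,  0,  0],   # f^1 coefficients of de^1..de^7
    [ 0,  0, -1,  0,  1,  1,  1],   # f^2
])

# d^2 = 0: the coboundary of a coboundary vanishes.
print((D1 @ D0 == 0).all())            # True

# dphi has rank(D0) = 5 degrees of freedom: 6 vertex values minus the
# constant offset that d kills.
print(np.linalg.matrix_rank(D0))       # 5
```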

> Another hint: everything works in perfect analogy with
> electromagnetism in the continuum.

Yum :)

Best regards,
Eric

PS: I smell cohomology...

Gerard Westendorp

Jul 14, 2003, 4:14:25 PM
to sci-physic...@moderators.isc.org

Derek Wise wrote:

[..]

> As John mentioned, our model for spacetime is a finite
> 2-complex: Roughly, this consists of finite sets V, E,
> and F of Vertices, Edges connecting vertices, and
> Faces whose boundaries consist of one or more edges.


Ah, a generalization of a "circuit". I have been looking
for the right terminology for this for some time.

[..]


> Finally, we are ready to compute the action of this
> particular connection
> on our model spacetime. It is:
>
> S(A) = F1^2 + F2^2.


I don't quite understand this action. It seems that
A is analogous to the vector potential, and F is analogous
to the magnetic field. But the electromagnetic field
normally involves also electric fields, and the action
is something like B^2-E^2, which would be analogous to
F^2 - (dA/dt)^2.
Indeed, there is no time in your model so far.

[..]


> What form does gauge invariance take in this theory?
> It is clear that to change the connection without
> affecting the curvature we need only ensure that the
> sum of A(e) around each face is unchanged. For
> example, we can make the following change:
>
>     A4+t     A7+3t
> *--->---*---<----*
> |       |        |
> |       |        |
> A1+t^   ^A3+t    ^A6-t     (t any real number)
> |       |        |
> |       |        |
> *--->---*--->----*
>     A2+t     A5-t

You could make up a "potential" (phi) at each vertex
such that t_ij = phi_i - phi_j. All conservative "t-fields"
are gradients of a potential. So the amount of freedom here
is equal to the number of vertices. You could subtract 1
from this because you can define one vertex to be at
ground potential. So then the dimension of the gauge
degrees of freedom is V-1, or in the example, equal to 5.

[..]


> So, we have discovered one degree of gauge freedom.
> In fact, there are 4 more. I claim that there is one
> degree of gauge freedom for every edge in a maximal
> simply connected subgraph of (V,E). (Does this sound
> right to you, John?)


Is that the same as V-1? A triangle has 3 vertices and
3 edges. So according to V-1 it has 2 gdof's. But the
number of edges is 3, so that would be in conflict.

[..]


> Now this makes it easy to understand why the integral
> in question is almost always infinity.


I never understood path integrals. You sum over all
paths, but not really. Some paths are excluded. Sometimes
the endpoints are fixed, and sometimes not, but then the
paths must keep the Hamiltonian fixed. The rules for this
seem obscure to me.

Gerard


Phil

Jul 16, 2003, 11:51:03 AM
John Baez wrote...

>Next, we define the "action" of any connection, S(A),
>to be the sum over all faces of F(f)^2. This is a lattice
>version of the usual Lagrangian for the electromagnetic
>field. Of course we'd need a different formula if G
>were not the real numbers, since then F(f) wouldn't be
>just a number. But we can tackle that later - it's not
>so hard.

>Next, we say an observable O is any real-valued function
>of the connection. We try to compute its vacuum expectation
>value by doing a path integral like this:

>        int O(A) exp(-S(A)) dA
><O> =  -------------------------
>           int exp(-S(A)) dA


Coming back to the problems with these integrals, since they are
infinite you might think of trying some regularization process such as
integrating each gauge variable from -R to R and letting R go to
infinity.


The integrals are then finite for given R (actually I am assuming that
the sets V,E and F are finite although you did not say that). The
variables can be rescaled by substitution A = Ra and the integrals are
over the variables a from -1 to 1. The action is quadratic so an R^2
factor comes out of the action.

int exp(-S(A)) dA = int exp(-R^2 S(a)) R^n da

S(a) is a positive semi-definite quadratic form, with zero eigenvalues
corresponding to the gauge degrees of freedom. It is not so bad that the
integral diverges like R^n because this can cancel with the other
integral, but the R^2 factor in the action means that only the zero
eigenvalues will contribute as R-> infinity. Result: a flat connection.

To get more interesting results I suggest a U(1) gauge field instead.
Use the traditional Wilson loop action around faces. Then the action is
finite from the beginning.

Alternatively you could renormalise the R^2 factors away by introducing
some constants to absorb them but the result is not so different and you
can do more interesting things with the Wilson action.

John Baez

Jul 20, 2003, 1:49:37 AM
In article <bf1l2m$b8t$1$8302...@news.demon.co.uk>,
Phil <ph...@weburbia.com> wrote:

>Coming back to the problems with these integrals, since they are
>infinite you might think of trying some regularization process such as
>integrating each gauge variable from -R to R and letting R go to
>infinity.

That's one option, but we will go a different route,
namely gauge-fixing, since this involves some interesting
ideas - especially cohomology.


Derek Wise
Jul 20, 2003, 2:45:35 AM
to sci-physic...@moderators.isc.org

Gerard Westendorp <wes...@xs4all.nl> wrote in message news:<3F124C54...@xs4all.nl>...

> I don't quite understand this action. It seems that
> A is analogous to the vector potential,

Right.

> and F is analogous
> to the magnetic field. But the electromagnetic field

> normally involves also electric fields, ...

Not quite. The F really is the *electromagnetic* field.
It's a rank-2 tensor that includes all the components of
the electric and magnetic fields. If you aren't familiar
with how this stuff all works in the usual continuum
case, I suggest the first part of _Gauge Fields, Knots,
and Gravity_ by Baez and Muniain. (Really, it's good! --
I'm not just trying to butter up my advisor by
recommending his book!)

> Indeed, there is no time in your model so far.

Our "2-complex" really is supposed to be *spacetime*, and
not just space. Honest! :)

> You could make up a "potential" (phi) at each vertex
> such that t_ij= phi_i-phi_j.

Yep, I think that's just what we should do. (See some of
the parallel posts in this thread)

I wrote:
> > I claim that there is one
> > degree of gauge freedom for every edge in a maximal
> > simply connected subgraph of (V,E).

and Gerard Westendorp replied:

> Is that the same as V-1? A triangle has 3 vertices and
> 3 edges. So according to V-1 it has 2 gdof's. But the
> number of edges is 3, so that would be in conflict.

The triangle isn't simply connected: it has a loop. To
get rid of the loop, we have to remove one edge, leaving
only 2. So, both methods yield the same result.

There is one exception to your rule, and that is if the
graph (V,E) is not connected. Some people might consider
this a silly exception to worry about. But since I'm a math
grad student, it's my responsibility to be more pedantic than
the average joe. Still, your formula can be patched up to
work even when there is more than one connected component of
spacetime. Your formula is:

Gauge dof = [# of vertices] - 1

but what is the "1"? It's really the number of connected
components, so more generally,

Gauge dof = [# of vertices] - [# of connected components].

This is then equivalent to my method of computing the dof.
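Derek's corrected formula is easy to check by machine: count the edges in
a maximal spanning forest with a union-find, which handles disconnected
graphs automatically. A small illustrative sketch (the vertex numbering
and the helper name are mine):

```python
def gauge_dof(num_vertices, edges):
    """Gauge dof = (# of vertices) - (# of connected components),
    i.e. the number of edges in a maximal spanning forest of (V, E)."""
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    components = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:           # edge joins two components: forest edge
            parent[ru] = rv
            components -= 1
    return num_vertices - components

# Triangle: 3 vertices, 3 edges, one loop -> 2 gauge dof, as in the post.
print(gauge_dof(3, [(0, 1), (1, 2), (2, 0)]))  # -> 2
# Two disconnected triangles: 6 - 2 = 4, not 6 - 1 = 5.
print(gauge_dof(6, [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]))  # -> 4
```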

DeReK

Jason
Jul 20, 2003, 3:02:36 PM
to
I know you're working on R gauge theory here, instead of U(1) gauge
theory. But suppose we have a U(1) lattice gauge theory. Now, A:E->R
would have to be replaced with U:E->U(1) (using Wilson lines instead of
vector potentials) and F:F->R (they're not the same F!) would have to be
replaced by U:F->U(1) (Wilson loop around a face). The contribution to
the action from each face (assuming this is a regular lattice so that
all the faces are isometric) would now be s(U_f) where s(e^(i
theta))=-beta cos theta (or beta(1-cos theta), it doesn't matter either
way because it just contributes a constant factor to the partition
function) where beta is a constant parameter. One thing to note now is
that this no longer describes a free Abelian Yang-Mills field but a U(1)
theory with nonrenormalizable terms. Still, if beta>>1, then the
irrelevant nonrenormalizable terms (in the sense of the renormalization
group) would run to zero and the coupling strength, g would be
proportional to beta^(-1/2).

But let's consider the case where beta<<1 (strong coupling). Let's define
f_n(beta) = 1/(2 pi) int_0^{2 pi} exp(-i n theta) exp(beta cos theta) dtheta

For sufficiently small beta, f_n(beta) is given to leading order in beta
by (beta/2)^|n|/|n|! where |n| is the absolute value of n.
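Since the sin part of exp(-i n theta) integrates to zero over a full
period, f_n(beta) reduces to a cosine integral (it is in fact the modified
Bessel function I_n(beta)), and the leading-order claim is easy to check
numerically. A sketch; the step count and beta = 0.01 are arbitrary
choices of mine:

```python
import math

def f_n(n, beta, steps=4096):
    """f_n(beta) = (1/2pi) int_0^{2pi} exp(-i n theta) exp(beta cos theta) dtheta.
    Only the cos(n theta) part survives; the midpoint rule on a periodic
    integrand converges very quickly."""
    h = 2.0 * math.pi / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * h
        total += math.cos(n * theta) * math.exp(beta * math.cos(theta))
    return total * h / (2.0 * math.pi)

beta = 0.01
for n in (0, 1, 2, 3):
    leading = (beta / 2.0) ** n / math.factorial(n)  # (beta/2)^|n| / |n|!
    print(n, f_n(n, beta), leading)
```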

It turns out just like the partition function of a quantum scalar field
can be written as the sum of Feynman diagrams, the partition function of
this U(1) gauge theory can be written as a sum over spin foams. Every
face of this lattice is assigned an integer such that for any given
directed edge, if all its neighboring faces are oriented so that each
edge has the same direction as the boundary of those faces, then the sum
of the assignments of these faces adds up to zero. If the orientation of
a face is reversed, then its integer assignment is simply the negative
of its initial assignment. The weight of a spin foam is given by the
product of f_n(beta) over each face where n is the integer assigned to
that face.

Back to the example of a quantum scalar field. We're used to claiming
that a classical scalar field is the classical version of that theory
and the functional integral to get the partition function is the sum
over all possible classical states, weighted by exp(iS) in spacetime and
exp(-S) in Euclidean space. However, as far as the partition function is
concerned, we could equally have stated the classical version is the
Feynman diagram itself, where now, topologically equivalent Feynman
diagrams with different spacetime paths are now considered distinct. The
action for a Feynman diagram in this case would now be the logarithm of
its contribution to the partition function. This, I think, is the way
Feynman presented QED in his book for laymen. Of course, there would now
be a problem with (locally) extremizing the action because while we
could take the functional derivative over topologically equivalent
Feynman diagrams, we can't do so in general over inequivalent diagrams
(or come to think of it, maybe we could continuously transform any two
topologically distinct Feynman diagrams with the same external legs if
we allow contracting loops away and creating loops in the same manner).

But here's the interesting point; we can do the same thing for spin
foams. If we exclude all those faces which are assigned zero from the
spin foam configuration itself, we can assign an action to each spin
foam which is simply the sum of -ln[f_n(beta)/f_0(beta)] for each face
with a nonzero assignment. For sufficiently small beta, this is just
approximately |n|ln(2/beta)+ln(|n|!).
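That small-beta form of the face action can be cross-checked directly;
here f_n(beta) is evaluated with a simple midpoint quadrature (an
illustrative choice of mine, as is the test value beta = 0.01):

```python
import math

def f_n(n, beta, steps=4096):
    """(1/2pi) * int_0^{2pi} cos(n theta) exp(beta cos theta) dtheta;
    the sin part of exp(-i n theta) integrates to zero."""
    h = 2.0 * math.pi / steps
    return sum(math.cos(n * (i + 0.5) * h) * math.exp(beta * math.cos((i + 0.5) * h))
               for i in range(steps)) * h / (2.0 * math.pi)

beta = 0.01
for n in (1, 2, 3):
    action = -math.log(f_n(n, beta) / f_n(0, beta))   # face contribution
    approx = n * math.log(2.0 / beta) + math.log(math.factorial(n))
    print(n, action, approx)  # the two columns agree closely for small beta
```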

Now, think of how one might go about putting a closed oriented
Nambu-Goto string theory on a lattice. By oriented, I simply mean we
exclude unorientable worldsheets like the Klein bottle. The closed
strings are allowed to branch and merge. Because of the finite
resolution of the lattice, it might be possible for there to be more
than one worldsheet (or the same worldsheet passing through more than
once) passing through any given face. Suppose we assume any two strings
which share a common segment but with opposite orientation "cancel" each
other along that segment. Then, we might implement this by assigning an
integer to each oriented face giving the sum of the number of
worldsheets with positive orientation minus the number of worldsheets
with negative orientation passing through that given face. The fact that
the strings are closed can be implemented by insisting that for any
given directed edge, if all its neighboring faces are oriented so that
that edge has the same direction as the boundary of those faces, then
the sum of the assignments of these faces adds up to zero. If T is the
string tension (normalized to the lattice), then we might expect the
contribution to the action from the lattice version of the Nambu-Goto
action would be given by |n|T. There are two difficulties, however.
First of all, the area of a worldsheet isn't given by the sum of the
number of faces it occupies because of the orientation problem of the
lattice. For example, there are 10 edges in the following diagram

|_
  |_
    |_
      |_
        |_

not 5sqrt(2). This objection might be answered by noticing that the
dominant contribution to the partition function comes from fractal like
worldsheets, so this approximation wouldn't be too far off for
sufficiently small lattice spacings. Alternatively, we could simply
replace the hypercubic lattice with a random lattice. The other
difficulty is the combinatorial factor to include when more than one
worldsheet coincide at a given edge. This combinatorial factor ought to
contribute something like -ln(|n|!) or something of that sort (I didn't
take orientation into account for my guess). Now, if we identify T with
ln(2/beta), the lattice Nambu-Goto action for closed oriented strings
looks rather similar to the action for a spin foam except for the sign
in front of ln(|n|!). I don't know what to make of it except to maybe
speculate this could be a reflection of the "Bose-Einstein" statistic of
a string.

Anyway, my point is that a strongly coupled U(1) theory (with
nonrenormalizable terms) gives a nearly identical partition function as
a Nambu-Goto theory of closed oriented strings!

Now, let's take a U(1) gauge field coupled to a S_1 nonlinear sigma
field. This S_1 is isomorphic to U(1). It turns out, going through the
same steps as above, that this gives a nearly identical partition
function as a Nambu-Goto theory of oriented open strings with "massive"
ends (i.e. the worldline of the ends contribute to the action).

Isn't this interesting? Actually, considering string theory was
originally invented to describe strong coupling QCD, maybe this isn't
such a coincidence after all.

Eric A. Forgy
Jul 20, 2003, 3:12:20 PM
to
Hi,

If I could perform some kind of Jedi mind-influencing trick across the
distance, I would try to get you to consider non-cubic complexes :) In
the spirit of Rudin and doing calculus "right", I've spent the last 7
years trying to do lattice field theory "right." [not that the time
I've spent makes me any more of an authority than I was 7 years ago
because I'm still clueless! :)] Only, so far, I have confined myself
to the classical theory, but I think getting that done right is the
first step before you should even attempt the quantum theory. It is ok
to start out with cube complexes, but if you choose to do that, I wish
you would at least do things with it in mind to eventually generalize
to other more general complexes, i.e. simplicial. I think this is
especially warranted considering that (I think!) you will eventually
want to try to relate this stuff to spin networks/foams.

My primary concern at the moment is in the way you have defined your
action

S(A) = sum_{f_i} (F_i)^2.

As I said before, this is "ok" for a cubic complex, but definitely not
for anything else. However, I wouldn't have been so concerned about it
until you said:

dere...@yahoo.com (Derek Wise) wrote:


> Gerard Westendorp <wes...@xs4all.nl> wrote:
> >
> > Indeed, there is no time in your model so far.
>
> Our "2-complex" really is supposed to be *spacetime*, and
> not just space. Honest! :)

If one of your grid axes is aligned with "time" and the other grid
axes are all orthogonal to time, then you can classify your edges into
two categories that I call "space" edges and "time" edges. A time edge
lies on the time axis and a space edge is an edge orthogonal to time,
i.e. lying in space :) Next, you can classify all higher degree cells
into "purely spatial cells" or "space-time cells." A purely spatial
cell contains no time edges and a space-time cell contains space edges
and time edges.

Come to think of it, a time-aligned complex is such a special case, it
is worth defining a special term for it. I suggest that if our complex
has n spatial dimensions and one time axis that is aligned with time
edges, then we should call this an (n+1)-complex, in accordance with
typical physics lingo for spacetime. For example, if our 2-complex is
intended to be space-time, then we should call it a (1+1)-complex.

For a (1+1)-complex, we only have space-time facets, but for an
(n+1)-complex for n > 1, which precludes our example complex, we will
have purely spatial facets in addition to space-time facets. In this
case, let {ss_i} denote the set of purely spatial facets and {st_i}
denote the set of space-time facets, then something special happens
and we can write

B_i := F(ss_i)

and

E_i := F(st_i).

In other words, if we evaluate F on a purely spatial facet, we call it
B and if we evaluate F on a space-time facet, we call it E. If this
seems artificial, it should! It highlights how artificial the
splitting of F into E and B are and that they are really just
different components of the same geometrical object F. The splitting
is obviously coordinate, i.e. reference frame, dependent.

For an (n+1)-complex, as Gerard pointed out, the action should really
be something like

S(A) = [sum_{ss_i} (B_i)^2] - [sum_{st_i} (E_i)^2]

This suggests that for an (n+1)-complex, at the very least, we should
be considering an action

S(A) = sum_{f_i} +/-(F_i)^2,

where + is taken if f_i is purely spatial and - is taken if f_i is a
space-time facet (or something like that).

But then this begins to beg the question, "Why the +/-?" The action
involves more than just F. It involves the metric and/or the Hodge
star and possibly a wedge product. Defining a meaningful metric and/or
Hodge star on a simplicial complex is what has stumped me for the past
7 years! :)

A cubic complex is such an amazingly simplifying beast that you can
almost do ANYTHING and get "some" results no matter what you do. In my
opinion, the test of whether or not what you are doing is "right" is
to see how it works on more general complexes.

Eric

*whisper* "You will do simplicial complexes. You WILL do simplicial
complexes..."

Jason
Jul 22, 2003, 7:21:51 PM
to sci-physic...@moderators.isc.org

Derek Wise wrote:

> Now, the integral John was referring to above is this
> one:
>
> /
> | exp(-S(A)) dA
> /
>
> taken over the space of connections, R^E. The
> trouble, he mentioned, is that this is usually
> infinity. In fact, I think it is infinity whenever
> the model includes at least one face that has two or
> more edges as its boundary! The reason is gauge
> invariance.

Actually, if the lattice is connected, the space of gauge
transformations is |V|-1 dimensional where |V| is the cardinality of V.
Also, gauge transformations (modulo the global gauge transformation) act
freely upon the state space of all possible A assignments to the edges
(i.e. the orbits are isomorphic to the space of gauge transformations
(modulo the global gauge transformation) itself). This is why the
partition function diverges.

> So it seems gauge fixing might really be necessary in
> this noncompact case. Can we not just calculate the
> expectation values by integrating over a subspace
> orthogonal to the one given by gauge freedom?

But the same thing is true for continuum functional integrals. There, a
trick known as Faddeev-Popov gauge fixing was used. The exact same trick
can be used for an R lattice gauge theory. The observables, of course,
would now have to be gauge invariant.


Derek Wise
Jul 22, 2003, 7:23:01 PM
to sci-physic...@moderators.isc.org

First, some notes on notation. Eric Forgy is quite correct to point
out that the curvature in this theory isn't just a map from the set of
faces to G -- it extends to linear combinations of faces. Similarly,
a connection is a map from formal linear combinations of edges to the
gauge group. We were being lazy, but I think his notation really does
make things work out better. So here's the notation I'll start using
(at least for now):

V,E,P = sets of Vertices, Edges, and Plaquettes (or Faces)
C_0 = formal linear combinations of elements of V
C_1 = " " E
C_2 = " " P

Note I've decided to use P for the set of faces, partly since I think
the notational conflict may still arise in other ways (e.g. the image
F(F)), but also because "plaquette" really does seem standard in the
physics literature. (Of course, some physics papers call the edges
"links," but we won't follow that convention -- especially since gauge
theory in general is rather "tied up" with knot theory...)

Another notational issue:

I wrote:
> > >It's convenient to write A1:=A(e1), A2:=A(e2), etc.,

and Eric A. Forgy replied:


> It may be convenient to keep track of sub/superscripts, e.g. e_1 is
> the edge 1 and e^1 is the 1-cochain dual to the edge e_1, i.e.

Yeah, I think you are right. However, in this case I'll probably
continue to be lazy for as long as I can get away with it. I hate
typing underscores. At least when I'm not explicitly talking about
duals, I'll probably just write e1, A1, etc. when I really mean e_1,
A_1...

Okay, now lets get on to something of more substance!

We've got our tiny little sample spacetime, which looks like this:



        e4      e7
    v4-->---v5--<---v6
    |       |       |
    |       |       |
  e1^  f1   ^e3  f2 ^e6
    |       |       |
    |       |       |
    v1--->--v2--->--v3
        e2      e5

Let's start out with a scalar field, a "0-form:"

phi:C_0 --> R.

this gives us:

d(phi):C_1 --> R
e |---> phi(t(e)) - phi(s(e))

Here I am borrowing some of the language of category theory, without
being specific yet at all about how we categorify our 2-complex. I
guess I might as well let the secret out that we ultimately want to
talk about categorifying all the stuff we're talking about in this
thread. (Actually, this is no real secret to anybody who knows JB --
his goal is to categorify the universe!). The t(e) and s(e) above
stand for the source and target of the edge e (which we will think of
as a 1-morphism). For example t(e1)= v4, s(e1)=v1.

So, d(phi) is an example of a connection. It's a flat connection
(more on this in a minute). In general, a connection is a "1-form":

A:C_1 --> R,

while the curvature is a "2-form:"

F:C_2 --> R.

Now, in regular continuum electromagnetism, we get F from A by
applying the differential d and we'd like to do the same thing here.
Eric Forgy worked out one nice way to think about this. I haven't
thought much yet about how what he wrote relates to what I'm about to
say. I'm sure it's equivalent. I'll probably have some comments
about this soon, Eric.

To get F=dA, what we would really like to say is:

dA:C_2 --> R
f |---> A(t(f)) - A(s(f)).

in direct analogy with how we defined d(phi). The problem here is
that, as we have defined faces so far, they have neither sources nor
targets. Really we should be thinking of faces as 2-morphisms between
1-morphisms (linear combinations of edges). For example, in our model
spacetime, draw f1 as a double arrow "====>" from the lower left of
the face (near v1) to the upper left (near v5). I'd do this, but it's
beyond my ASCII-Art patience to figure out how. What this means is
that f1 is a 2-arrow from (-e2+e1) to (e3-e4). Note the orientation
-- there's a "right-hand rule" involved here, which is equivalent to
my earlier choice of orienting all the faces ccw. With this way of
looking at faces, the s & t in the above now make sense.

Finally, we can use all of our definitions to show that d(phi) really
is a "flat" connection, i.e. we can show that d(phi) is closed, as we
would expect. Actually, I'll just work out an example, since the
proof of the general case is not really harder. Consider our face f1,
now a 2-arrow oriented in the way I described above. We have:


F(dphi)(f1) = d(dphi)(f1)
= dphi(t(f1)) - dphi(s(f1))
= dphi(e3-e4) - dphi(-e2+e1)
= dphi(e3) - dphi(e4) + dphi(e2) - dphi(e1)
= [phi(v5)-phi(v2)] - [phi(v5)-phi(v4)]
+ [phi(v2)-phi(v1)] - [phi(v4)-phi(v1)]
= 0.

Here's the picture again, for reference in working through this
calculation:

        e4      e7
    v4-->---v5--<---v6
    |       |       |
    |       |       |
  e1^  f1   ^e3  f2 ^e6
    |       |       |
    |       |       |
    v1--->--v2--->--v3
        e2      e5
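The same check can be automated for both faces at once. In this sketch the
edges and face boundaries are entered by hand from the picture (the
dictionary layout and the random-looking values of phi are my own choices);
every face of d(d phi) should come out zero:

```python
# Vertices v1..v6 and edges with (source, target) read off the picture;
# faces are signed lists of boundary edges, oriented ccw as in the post.
edges = {
    "e1": ("v1", "v4"), "e2": ("v1", "v2"), "e3": ("v2", "v5"),
    "e4": ("v4", "v5"), "e5": ("v2", "v3"), "e6": ("v3", "v6"),
    "e7": ("v6", "v5"),
}
faces = {  # sign is +1 when the edge's arrow agrees with the ccw orientation
    "f1": [("e2", +1), ("e3", +1), ("e4", -1), ("e1", -1)],
    "f2": [("e5", +1), ("e6", +1), ("e7", +1), ("e3", -1)],
}

def d0(phi):
    """d(phi): edge e |---> phi(t(e)) - phi(s(e))."""
    return {e: phi[t] - phi[s] for e, (s, t) in edges.items()}

def d1(A):
    """dA: face f |---> oriented sum of A over the boundary edges of f."""
    return {f: sum(sign * A[e] for e, sign in bdry) for f, bdry in faces.items()}

# An arbitrary scalar field phi; d(d phi) should vanish on every face.
phi = {"v1": 1.7, "v2": -0.3, "v3": 4.0, "v4": 2.2, "v5": 0.9, "v6": -5.1}
print(d1(d0(phi)))  # both faces map to 0, up to rounding
```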


There's probably more to be said about this stuff, but I'm getting a
bit worn out, so it'll have to wait. In particular, I'd like to work
out what the Maxwell equation *d*F=J says in this context (and for
that matter, what Hodge duality is exactly here).

John Baez wrote:
> > I have to run now... I'll continue replying to your
> > post later. But if you want to think about something,
> > you can think about precisely what gauge transformations
> > *are* in this model. We really want to think about the
> > set of them... the group of them, actually.

Making a gauge transformation means adding d(phi) to the connection,
for any scalar potential phi, just as in continuum electromagnetism.
The group of gauge transformations is naively just R^V, but of course,
d wipes out any (local) constant in phi, so I guess the group is
really R^V/R^n = R^(V-n) where n is the number of connected components
of spacetime. (Incidentally, John, isn't the fact that a scalar
potential is only determined up to a constant really a primitive
example of a "modification" -- a "gauge transformation between gauge
transformations"? Oooooo -- more n-categories. Wheee!)

Going one step further, I've got the feeling that we're really
interested in "connections modulo gauge transformations" (or
"connections modulo (gauge transformations modulo modifications)").
That, I guess, would be R^E/R^(V-n) = R^(E-V+n). In our example
spacetime with 6 vertices and 7 edges, I get R^(7-6+1) = R^2.

John Baez wrote:
> mathematicians will call it a sacrilege against Euler
> if you write his famous formula as anything other than V-E+F.
> And we will probably run into that formula sometime in this
> discussion, at least if we get far enough!

Hmmm... if we went one step further, we would have
R^P/R^(E-V+n)=R^(P-E+V-n), so we get Euler's "famous formula" in the
exponent (sacrilege included!). Does this have anything to do with
how you expected it to show up? In our example, I get
R^(2-7+6-1)=R^0, the trivial group. I don't really know what I'm
doing here. I'm just playing with numbers.
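Derek's dimension counts can be reproduced by writing d as incidence
matrices for the sample spacetime and computing ranks. A sketch in plain
Python (Gaussian elimination just to avoid dependencies; the matrix
conventions -- edges as rows of d0, faces as rows of d1 -- are my own):

```python
def rank(mat):
    """Rank by Gaussian elimination; entries are small integers, so
    floating point is safe here."""
    m = [list(map(float, row)) for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > 1e-9:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# d0: C_0 -> C_1 (rows = e1..e7, cols = v1..v6): +1 at target, -1 at source.
edge_ends = [(1, 4), (1, 2), (2, 5), (4, 5), (2, 3), (3, 6), (6, 5)]
d0 = [[0] * 6 for _ in edge_ends]
for row, (s, t) in enumerate(edge_ends):
    d0[row][s - 1] -= 1
    d0[row][t - 1] += 1

# d1: C_1 -> C_2 (rows = f1, f2; cols = e1..e7), faces oriented ccw.
d1 = [[-1, 1, 1, -1, 0, 0, 0],   # f1 = -e1 + e2 + e3 - e4
      [0, 0, -1, 0, 1, 1, 1]]    # f2 = -e3 + e5 + e6 + e7

gauge_dof = rank(d0)              # V - n = 6 - 1 = 5
moduli = 7 - gauge_dof            # connections mod gauge: E - V + n = 2
dd = [[sum(d1[i][k] * d0[k][j] for k in range(7)) for j in range(6)]
      for i in range(2)]          # composite d o d should be the zero map
print(gauge_dof, moduli, all(x == 0 for row in dd for x in row))
```

The printed 5 and 2 match R^V/R^n = R^5 and Derek's R^(7-6+1) = R^2.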

Gotta run...

DeReK

PS: To send me spam, just use the address in the header.
Otherwise, send me mail by using my first name at the domain
math.ucr.edu.

Phil
Jul 24, 2003, 5:16:51 PM
to sci-physic...@moderators.isc.org

"Eric A. Forgy" <fo...@uiuc.edu> wrote in message
news:3fa8470f.03072...@posting.google.com...


> But then this begins to beg the question, "Why the +/-?" The action
> involves more than just F. It involves the metric and/or the Hodge
> star and possibly a wedge product. Defining a meaningful metric and/or
> Hodge star on a simplicial complex is what has stumped me for the past
> 7 years! :)

That does sound like a key point and if it has had you stumped for so long
I certainly am not going to know the answer.

Of course you can have geometry on a simplicial lattice using Regge
calculus, but I suppose you mean that you can't find a gauge invariant
action.

Usually spin half objects sit on vertices, spin one on edges etc., but
the metric is spin two, so perhaps it will require variables defined on
at least 3-dimensional simplices. Have you thought about that?

In case gravity is too difficult a problem for this thread, could there
be a simple gauge action analogous to one of those topological field
theories that does not have a metric?

Matthew Nobes
Jul 24, 2003, 5:32:31 PM
to sci-physic...@moderators.isc.org

Eric A. Forgy <fo...@uiuc.edu> wrote:

> If I could perform some kind of Jedi mind-influencing trick across the
> distance, I would try to get you to consider non-cubic complexes :)

Have you looked at the work of David Kaplan, regarding supersymmetry on the
lattice? While the overall structure is still cubic he does put (super)fields
on the diagonals.

--
Matthew Nobes
c/o Physics Dept. Simon Fraser University, 8888 University
Drive Burnaby, B.C., Canada
http://www.sfu.ca/~manobes

John Baez
Jul 25, 2003, 9:22:58 AM
to
In article <bf7c47$o2i$1...@news.udel.edu>, Jason <pri...@excite.com> wrote:

>I know you're working on R gauge theory here, instead of U(1) gauge
>theory.

We'll do the U(1) version as soon as we finish the R version.
Please wait.

>But suppose we have a U(1) lattice gauge theory.

Oh, darn! Now you'll probably give everything away!

I wanted Derek to figure it all out himself -
with gentle nudges here and there, when needed.

>Now, A:E->R
>would have to be replaced with U:E->U(1) (using Wilson lines instead of
>vector potentials) and F:F->R (they're not the same F!) would have to be
>replaced by U:F->U(1) (Wilson loop around a face).

Right. Luckily Derek already knows this - in conversations
before I left for the summer, we talked about this general
formalism for an arbitrary Lie group playing the role of
gauge group.

>The contribution to
>the action from each face (assuming this is a regular lattice so that
>all the faces are isometric) would now be s(U_f) where s(e^(i
>theta))=-beta cos theta (or beta(1-cos theta), it doesn't matter either
>way because it just contributes a constant factor to the partition
>function) where beta is a constant parameter.

Hmm, we didn't talk about the action for the U(1) theory yet.

Derek: DON'T READ THIS!

>One thing to note now is
>that this no longer describes a free Abelian Yang-Mills field but a U(1)
>theory with nonrenormalizable terms. Still, if beta>>1, then the
>irrelevant nonrenormalizable terms (in the sense of the renormalization
>group) would run to zero and the coupling strength, g would be
>proportional to beta^(-1/2).

Hmm, even I didn't know *that*. I guess that means I shouldn't
read it... but since it's my job to play the all-knowing wizard,
I'll peek and then pretend I knew it all along.

I've been fascinated by claims that lattice U(1) electromagnetism
exhibits confinement (in certain dimensions, with certain values
of the coupling constant). Is this the theory they're talking
about when people make these claims? Can you sketch for us what
happens in different dimensions in the strong and weak coupling
regimes? You're not mentioning dimensions here, so I don't know
if you're working with a 4d lattice or a lattice of arbitrary
dimension. (I'm going to start Derek off on 2d lattice gauge
theories, because they're exactly soluble and very pretty.)

Another question: if you change the formula for the action
a bit, can you get a theory that DOES describe a free abelian
Yang-Mills theory? I have my own favorite formula for the
action, which is very nice in 2 dimensions, and I wonder if that
would give substantially different results from yours.

>It turns out just like the partition function of a quantum scalar field
>can be written as the sum of Feynman diagrams, the partition function of
>this U(1) gauge theory can be written as a sum over spin foams.

That's actually very general fact about lattice gauge theory,
and it's one of the secret reasons I'm explaining this stuff
to Derek. He'll think he's doing lattice gauge theory and
then - WHAMMO! - before he knows it, he'll be doing spin foams!

[fancy stuff I hope to understand someday deleted]

>Anyway, my point is that a strongly coupled U(1) theory (with
>nonrenormalizable terms) gives a nearly identical partition function as
>a Nambu-Goto theory of closed oriented strings!

Cool!!!

>Isn't this interesting?

Yeah! I wish I really understood it. Maybe I'll use my wizardly
powers to force Derek to learn about it and explain it to me! Is
there some place I, or he, can read about it?

>Actually, considering string theory was
>originally invented to describe strong coupling QCD, maybe this isn't
>such a coincidence after all.

Right!

Anyway, in this thread Derek and I are going to approach these
interesting issues in a very slow and careful way, so if you want
to jump in an explain this stuff over again after we've worked
a bit with U(1) electromagnetism, that would be very nice.


John Baez
Jul 25, 2003, 10:12:00 AM
to
In article <3fa8470f.03070...@posting.google.com>,
Eric Forgy wrote:

>Thank you for opening up your discussion like this on s.p.r. I hope
>that if I ask some naive questions here and there, I won't distract
>from the subject too much.

Sure, it'll probably help. Only I know what you're
going to say... something about how you really really
want a theory of differential forms - including the

*HODGE STAR OPERATOR*

- on a lattice! This whole discussion is veering dangerously
close to this old obsession of yours... there's no way you'll
be able to resist bringing it up.

Let's see:

> ba...@galaxy.ucr.edu (John Baez) wrote:

>> Next, we define the "action" of any connection, S(A),
>> to be the sum over all faces of F(f)^2. This is a lattice
>> version of the usual Lagrangian for the electromagnetic
>> field. Of course we'd need a different formula if G
>> were not the real numbers, since then F(f) wouldn't be
>> just a number. But we can tackle that later - it's not
>> so hard.

> My first naive question...

See, folks? See how he's playing innocent?
"My first naive question", he says...
I can already tell he's up to something!

>The action
>
>S(A) = sum_f F(f)^2
>
>looks a lot like a discrete version of
>
>S(A) = int_M (F,F) vol
>
>for some smooth manifold M.

Yes, it's a kind of caricature of the usual
action for the electromagnetic field. However,
now we are working in the Euclidean signature,
not Lorentzian, so F(f)^2 is nonnegative, unlike
the Lorentzian version of the inner product (F,F).
It's a bit more like E^2 + B^2 than E^2 - B^2.

>If you broke this up into little n-cells
>C_i then you can write this as
>
>S(A) = sum_i [int_{C_i} (F,F) vol].
>
>If each of the cells C_i were actually little cubes, then I think that
>after some manipulations this will reduce to something like
>
>S(A) = sum_{f in C_i} F(f)^2 |C_i|

Right! Exactly!

>where |C_i| is the volume of C_i, which we could take to be 1

Yeah, we're setting everything we can equal to 1 in this
discussion, to keep things simple.

>and then we get something like
>
>S(A) = sum_f F(f)^2.

EXACTLY!

>(I basically wrote all that out in an attempt to describe the way I'm
>thinking about this, which may be completely off. Please correct me if
>I'm wrong.)

No, you're exactly right. But stop playing innocent:
you're carefully setting a trap and now you're about to
spring it. But I'm prepared for it!

> So my question is,

Yes...?

>if you are letting your action be (as you said)
>
>S(A) = sum_f F(f)^2

Yes....

>then are you assuming your 2-complex is actually a 2-cube complex?

NO!

I'm just using F(f)^2 to keep the calculations simple!

I'm trying to explain some stuff about path integrals
in lattice gauge theory, and the calculations will
work out easier if we take a really simple formula
for the action! HA! Pure convenience, that's all!

>For a more general 2-complex, e.g. a simplicial complex, then I would
>almost expect to see something like
>
>S(A) = sum_{f1 in F} sum_{f2 in F} g(f1,f2) F(f1) F(f2),
>
>where g(f1,f2) is some coefficient involving dot product of the
>normals of f1,f2 (or maybe the edges).

You're right, of course: if we wanted to solve YOUR
favorite problem, namely to get a nice version of electromagnetism
on a fairly general sort of 2-complex that reduces to the
usual Maxwell equations in the continuum limit, we would
need to make sure the formula for the action took into account
the *geometry* of how the 2-complex was sitting inside
Minkowski spacetime - or 4d Euclidean space, for what we're
doing here.

But, since I just want to explain some very general ideas about
path integrals and lattice gauge theory and cohomology, I
want to keep things very simple. So, I'm just going to
use F(f)^2 as my action, instead of the more complicated
quadratic expression you wrote down.

However, you will see that when all is said and done,
the formalism I'll describe will only rely on the fact that
the action is a gauge-invariant quadratic function of the
connection A. So, it will be easy to generalize what we're
doing to deal with the issue you mention! Honest! Wait and see!

So relax and stop trying to play Jedi mind tricks
to out-wizard the Wiz, like this:

In article <3fa8470f.03072...@posting.google.com>,
Eric A. Forgy <fo...@uiuc.edu> wrote:

>If I could perform some kind of Jedi mind-influencing trick across the
>distance, I would try to get you to consider non-cubic complexes :)

We ARE! We are considering an arbitrary 2-complex and
using a simple formula for the action that's designed to
make calculations easy for pedagogical purposes. Later,
if we feel like it, we can work with the action you describe.
It won't seriously affect the issues we're trying to study. Honest.

>In the spirit of Rudin and doing calculus "right", I've spent the last 7

>years trying to [...]

Yeah, yeah... now here comes the tale of your quest for the
holy grail, the HODGE STAR OPERATOR. We've heard it before.
And we'll hear it again.

....................................................................

Disclaimer: the argumentative tone taken in this post was
inserted solely to spice up what would otherwise be a dry-as-dust
discussion of boring topics in mathematical physics. We are
trying to keep up our readership. In fact Eric Forgy's point
is perfectly correct, and the ambitious reader should generalize
everything we do to the more general action he proposes. We
stick to the simple F(f)^2 action solely for pedagogical purposes.

Jeffery

Jul 26, 2003, 6:31:25 PM7/26/03
to sci-physic...@moderators.isc.org

ba...@galaxy.ucr.edu (John Baez) wrote in message news:<bejme5$les$1...@glue.ucr.edu>...

> Yes, let's do that! This sort of notational collision
> always happens whenever you combine ideas from two subjects.
> It's a real bummer, since physicists will throw a fit
> if you called a U(1) curvature anything but "F" - god knows why -
> while mathematicians will call it a sacrilege against Euler
> if you write his famous formula as anything other than V-E+F.
> And we will probably run into that formula sometime in this
> discussion, at least if we get far enough!

In one dimension, all you can have is a line segment, which always has
two vertices. In two dimensions, you have polygons, which always have
the same number of edges as vertices. A triangle has three edges and
three vertices. A square has four edges and four vertices. For
polygons, vertices minus edges is always zero. In three dimensions,
you have Euler's formula V-E+F=2, as you can see here.

http://www.math.ohio-state.edu/~fiedorow/math655/Euler.html

In four dimensions, you have polychora, or 4D polytopes, which obey
the modified Euler equation V-E+F-C=0, which you can see here.

http://www.math.ohio-state.edu/~fiedorow/math655/HyperEuler.html

So, then putting this all together, you have the following.

1D -> V = 2

2D -> V - E = 0

3D -> V - E + F = 2

4D -> V - E + F - C = 0

where V is vertices, E is edges, F is faces, and C is cells, which are
the 3D "faces" of 4D polytopes. So there seems to be a pattern, where
with each additional dimension, you add an additional term to Euler's
formula, and the sign of the additional terms alternates minus, plus,
minus, etc., and the "answer" of the formula alternates 2, 0, 2, 0,
etc.

So does this pattern continue to hold for more than four dimensions?
Could you have a generalized Euler's formula that holds for any number
of dimensions? What if you have an infinite number of dimensions?

Jeffery Winkler

http://www.geocities.com/jefferywinkler

Ted Sung

Jul 29, 2003, 6:38:45 AM7/29/03
to
I have a basic question - when you drew the model spacetime as shown
below, how do you decide which way the arrows on the edges go? Does
the final answer depend on it? Are there multiple ways of specifying
the arrows and if so, does this invariance have any physical
implication?

Thanks,

Ted

Though not properly cited by Ted Sung, the diagram he refers to was probably Derek Wise's.

John Baez

Jul 30, 2003, 12:15:22 AM7/30/03
to
In article <325dbaf1.03072...@posting.google.com>,
Jeffery <jeffery...@mail.com> wrote:

>1D -> V = 2
>
>2D -> V - E = 0
>
>3D -> V - E + F = 2
>
>4D -> V - E + F - C = 0
>
>where V is vertices, E is edges, F is faces, and C is cells, which are
>the 3D "faces" of 4D polytopes. So there seems to be a pattern, where
>with each additional dimension, you add an additional term to Euler's
>formula, and the sign of the additional terms alternates minus, plus,
>minus, etc., and the "answer" of the formula alternates 2, 0, 2, 0,
>etc.
>
>So does this pattern continue to hold for more than four dimensions?

Yes. If you take the n-sphere and chop it up into convex polytopes
in any way, you get a formula like this which gives 2 when n is even
and 0 when n is odd. (Note that your "1D" formula concerns the 0-sphere,
your "2D" formula concerns the 1-sphere, and so on.)
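Concretely, this is easy to check by machine for hypercube boundaries, since the boundary of the n-cube has C(n,k) 2^(n-k) faces of dimension k (a standard count). A quick Python sketch:

```python
from math import comb

def boundary_f_vector(n):
    """Counts of k-faces (k = 0 .. n-1) of the boundary of the n-cube."""
    return [comb(n, k) * 2 ** (n - k) for k in range(n)]

def euler(fvec):
    """Alternating sum V - E + F - C + ..."""
    return sum((-1) ** k * f for k, f in enumerate(fvec))

for n in range(1, 7):
    print(n, "->", euler(boundary_f_vector(n)))
# 1 -> 2, 2 -> 0, 3 -> 2, 4 -> 0, 5 -> 2, 6 -> 0
```

For n = 3 this reproduces the cube's V = 8, E = 12, F = 6 with 8 - 12 + 6 = 2, and the output alternates 2, 0, 2, 0, ... exactly as the pattern predicts.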

More generally, for any compact manifold there will be a magic number
like this called its "Euler characteristic". For example, the surface
of a doughnut has Euler characteristic 0: try chopping this surface into
convex polygons and computing V - E + F. More generally, the surface of
a g-holed doughnut has Euler characteristic 2-2g.
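The 2-2g can be read off from the standard CW structure on the genus-g surface: a 4g-gon with its sides identified in pairs has one vertex, 2g edges and one face. A trivial check (nothing here beyond V - E + F):

```python
def euler_genus(g):
    # Standard CW structure on the genus-g surface: a 4g-gon with sides
    # identified in pairs gives V = 1, E = 2g, F = 1.
    V, E, F = 1, 2 * g, 1
    return V - E + F

print([euler_genus(g) for g in range(4)])  # [2, 0, -2, -4]
```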

In algebraic topology we learn that the Euler characteristic of a space
is the alternating sum of the "ranks of its rational homology groups",
and we learn how to compute these using any way of chopping up the
space into convex polytopes (or "cells"). Anyone who doesn't know this
stuff is missing out on most of the fun in life.

>Could you have a generalized Euler's formula that holds for any number
>of dimensions? What if you have an infinite number of dimensions?

What matters is not so much the dimension as the topology of the
space being chopped up into cells. Some spaces have an ill-defined
Euler characteristic because they either have infinitely many nonzero
rational homology groups (typical of infinite-dimensional spaces)
or rational homology groups of infinite rank (e.g. the surface of
a doughnut with infinitely many holes). I restricted attention to
"compact manifolds" above to eliminate both these problems. But
the Euler characteristic is defined for lots of spaces besides
compact manifolds.

It's also fun to try to compute the Euler characteristic of spaces
whose Euler characteristic is undefined! If the alternating
sum of ranks of rational homology groups diverges, you can still use
sneaky tricks to extract a finite answer, just like physicists get
finite and apparently sensible answers out of divergent integrals
in quantum field theory. In fact some of the same tricks work,
like "zeta function regularization". The neat thing is that one
can get non-integral Euler characteristics this way. The Euler
characteristic is really a generalization of cardinality that allows
negative numbers as values, so this other stuff is a further
generalization that allows non-integral cardinalities.

I've spent a lot of time thinking about this, and I'll give a talk
on it at the Fields Institute Program on Homotopy Theory and its
Applications, which is being held Sept. 20th-24th at the University
of Western Ontario:

http://www.math.uwo.ca/homotopy/

It's being organized by Rick Jardine and Dan Christensen, and
a lot of the world's experts on homotopy theory will be there,
as well as some jokers like me. So, it should be fun.

Here's the abstract of my talk:

Euler Characteristic versus Homotopy Cardinality

Just as the Euler characteristic of a space is the alternating sum
of the dimensions of its rational cohomology groups, the homotopy
cardinality of a space is the alternating product of the cardinalities
of its homotopy groups. There are very few spaces for which the
Euler characteristic and homotopy cardinality are both well-defined.
However, the two quantities have many of the same properties. In
many cases where one is well-defined, the other may be computed by
dubious manipulations involving divergent series, and the two then
agree. We give examples, describe applications of homotopy cardinality
to a topological version of Joyal's theory of "species", and beg
the audience to find some unifying concept which has both Euler
characteristic and homotopy cardinality as special cases.

One reason the topological version of Joyal's theory of
species should be interesting to physicists is that it provides
a combinatorial interpretation of Fock space, annihilation and
creation operators, and so on. James Dolan and I explained
this here:

From finite sets to Feynman diagrams, in Mathematics Unlimited -
2001 and Beyond, vol. 1, eds. Bjorn Engquist and Wilfried Schmid,
Springer, Berlin, 2001, pp. 29-50. Also available at
http://www.arXiv.org/abs/math.QA/0004133

In short, the Euler characteristic has a lot of life left in it!


Eric A. Forgy

Jul 31, 2003, 2:58:08 AM7/31/03
to sci-physic...@moderators.isc.org

"Phil" <ph...@weburbia.com> wrote:
> Usually spin half objects sit on vertices, spin one on edges etc., but the
> metric is spin two, so perhaps it will require vairables defined on at least 3
> dimensional simplices. Have you thought about that?

You bring up a good point. In computational EM, a "revolution" occurred
when people realized that E and H degrees of freedom should be
associated with edges rather than nodes. What makes you think spin 1/2
objects should be associated with nodes?? I'd think spin 0 objects are
associated with nodes, spin 1 objects are associated with oriented
edges. Thinking back to the quantum gravity lectures, I'd be tempted
to think a spin 1/2 object should be associated with something in
between a node and an oriented edge, namely a directed edge (?).

I've been thinking of only integer-rank tensors for so long, so please
forgive me if this is naive, but here is what I am thinking:

Node:
-----

o

Directed Edge:
--------------

----->-----

Oriented Edge:
--------------

----->-----
-----<-----

In other words, an oriented edge (meson? :)) is a pair of oppositely
directed edges. I don't think this is so whacky because a 2-component
spinor corresponds to a 4-component vector via Pauli spin matrices by
tensoring itself with its adjoint (i.e. dual), i.e. if psi is a
2-component spinor let

V
= psi (x) psi^+
= V^0 e_0 + V^1 e_1 + V^2 e_2 + V^3 e_3

where V^i = 1/2 Tr(V e_i) and e_i is a Pauli spin matrix.
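For anyone who wants to check this numerically, here is a small Python sketch (the spinor value is made up, not from the post) that computes V^i = 1/2 Tr(V e_i) and verifies that the resulting 4-vector is null, so (V^1, V^2, V^3)/V^0 lands on the unit sphere:

```python
psi = [0.6, 0.8j]   # an arbitrary normalized 2-component spinor
# V = psi (x) psi^+, written out as a 2x2 matrix:
V = [[psi[a] * psi[b].conjugate() for b in range(2)] for a in range(2)]

pauli = [
    [[1, 0], [0, 1]],      # e_0 = identity
    [[0, 1], [1, 0]],      # e_1
    [[0, -1j], [1j, 0]],   # e_2
    [[1, 0], [0, -1]],     # e_3
]

def component(M, e):
    """V^i = (1/2) Tr(M e); real for Hermitian M."""
    return (0.5 * sum(M[a][b] * e[b][a] for a in range(2) for b in range(2))).real

comps = [component(V, e) for e in pauli]
spatial = sum(c ** 2 for c in comps[1:]) ** 0.5
print(comps, spatial)   # |(V^1, V^2, V^3)| equals V^0: the vector is null
```

Dividing the spatial part by V^0 then gives a point on the unit (Poincare) sphere, one per polarization state.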

As we learned in the QG seminar, the dual of a directed edge
corresponds to an oppositely directed edge and placing two edges next
to each other corresponds to tensor product. This is precisely what
you do when you construct a vector from a spinor.^* (I hope I learned
that correctly.)

It seems natural to me that a spin 1/2 object should be associated
with directed strands weaving through the lattice. In the case of EM
(spin 1), you should then have an oppositely directed edge for each
directed edge in the lattice.

Maybe a "revolution" is needed for placing fermions on a lattice. From
what I understand (which is not much), there hasn't been too much luck
doing so so far.

Eric

* I recently learned this method of constructing vectors from spinors
in the context of classical EM polarization. Since an electromagnetic
plane wave is polarized transverse to its direction of propagation, it
is naturally described as a two-component spinor. If you go through
this procedure and determine the vector corresponding to this
(normalized) spinor, you get a point on a unit sphere. Conversely,
every point on a unit sphere corresponds to a state of polarization.
This sphere is dubbed the "Poincare sphere." Is there anything
Poincare DIDN'T know? :)

John Baez

Jul 31, 2003, 8:04:43 AM7/31/03
to
I've been very busy and very jetlagged here in Lisbon, so
I'm falling behind on this thread, but luckily it's marching
ahead quite nicely, so I'll just offer a few suggestions.

In article <c2e84040.03071...@posting.google.com>,
Derek Wise <dere...@yahoo.com> wrote:

>First, some notes on notation. Eric Forgy is quite correct to point
>out that the curvature in this theory isn't just a map from the set of
>faces to G -- it extends to linear combinations of faces. Similarly,
>a connection is a map from formal linear combinations of edges to the
>gauge group. We were being lazy, but I think his notation really does
>make things work out better. So here's the notation I'll start using
>(at least for now):
>
> V,E,P = sets of Vertices, Edges, and Plaquettes (or Faces)
> C_0 = formal linear combinations of elements of V
> C_1 = " " E
> C_2 = " " P

This is all very nice. Next you should do this,
to the extent you haven't already:

1) Make these C_i's into a chain complex, with
a "differential" going down from C_i to C_{i-1}.

2) Make the dual vector spaces C^i = C*_i into a cochain
complex, with a differential going up instead of down.
Call this d and show d^2 = 0. This is the lattice version
of deRham theory.

3) Describe the connection A, the curvature F,
gauge transformations phi and the equations of
electromagnetism in this setup.

4) Show the action S(A) is a degenerate quadratic
form on the vector space C^1, but a NONDEGENERATE
quadratic form on a certain QUOTIENT SPACE of this -
the space of connections mod gauge transformations!

(Note that modding out by a subspace is more clever
than picking a complement to this subspace, since
it involves no arbitrary choices. Picking a complement
is called "gauge-fixing", but we're working with
gauge-invariant quantities, instead.)

5) Start doing some path integrals on this quotient
space. What's the measure?


Eric A. Forgy

Jul 31, 2003, 5:51:28 PM7/31/03
to
ba...@galaxy.ucr.edu (John Baez) wrote:
> Derek Wise <dere...@yahoo.com> wrote:
> >
> >First, some notes on notation. Eric Forgy is quite correct to point
> >out that the curvature in this theory isn't just a map from the set of
> >faces to G -- it extends to linear combinations of faces. Similarly,
> >a connection is a map from formal linear combinations of edges to the
> >gauge group. We were being lazy, but I think his notation really does
> >make things work out better. So here's the notation I'll start using
> >(at least for now):
> >
> > V,E,P = sets of Vertices, Edges, and Plaquettes (or Faces)
> > C_0 = formal linear combinations of elements of V
> > C_1 = " " E
> > C_2 = " " P
>
> This is all very nice. Next you should do this,
> to the extent you haven't already:

Hi,

I'll take a stab at a few of these. I'm sure I won't say anything that
Derek doesn't already know or couldn't learn in about 15 minutes, so
maybe by volunteering I'll save him some grunt work so he can move on
to something more interesting :)

> 1) Make these C_i's into a chain complex, with
> a "differential" going down from C_i to C_{i-1}.

There is another option for terminology that I should have suggested
earlier. Since we are dealing with chain spaces C_i, we could call the
set of i-cells S_i so that

S_0 = V
S_1 = E
S_2 = P

Then C_i is the space of formal linear combinations of elements in
S_i.

The boundary map

@_i:C_i->C_{i-1}

takes an i-chain and returns its boundary, which is an (i-1)-chain.
There are various combinatorial ways to define the boundary map
depending on whether the cells are simplicial or cubic, etc, but a
picture is worth a thousand words


+-----+ --->--
| | | |
@_2 | ,-> | = n v
| `- | | |
| | | |
+-----+ ---<--


where the RHS is a linear combination of oriented 1-cells and


@_1 --->--- = -o +o


where the RHS is a linear combination of oriented 0-cells. You see (I
hope) that the (i-1)-cells inherit their orientation from the parent
i-cell.

One of the beautiful things in physics that has uncountably many
applications and should be tattooed on everyone's brain is, "The
boundary of a boundary is 0."

Let's illustrate this with our 2-cell above

+-----+ @_1 --->--- -o +o
| | | | +o -o
@_1 @_2 | ,-> | = @_1 n @_1 v = = 0.
| `- | | |
| | | | -o +o
+-----+ @_1 ---<--- +o -o


This will always be zero because the inherited orientations of
@_{i-1} @_i always come in cancelling pairs (regardless of the degree
of the original chain).
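The cancellation is easy to see in matrix form. Here is a sketch in plain Python (no libraries), with the boundary maps of the single counterclockwise square above written as incidence matrices:

```python
# vertices v0..v3; edges e0: v0->v1, e1: v1->v2, e2: v2->v3, e3: v3->v0.
# @_1 sends each edge to (target) - (source); rows = vertices, cols = edges.
D1 = [[-1,  0,  0,  1],
      [ 1, -1,  0,  0],
      [ 0,  1, -1,  0],
      [ 0,  0,  1, -1]]
# @_2 sends the face to e0 + e1 + e2 + e3.
D2 = [[1], [1], [1], [1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

print(matmul(D1, D2))  # [[0], [0], [0], [0]] -- the boundary of a boundary is 0
```

Each row of the product is a +1 and a -1 meeting at a vertex, which is exactly the cancelling-pairs argument.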

Now we can formally write down the complex

@_0 @_1 @_{n-1} @_n
0 <---- C_0 <---- C_1 ... C_{n-2} <----- C_{n-1} <---- C_n


This sequence is said to be exact because when you traverse two stops,
you always get 0 (another way to say @_{i-1} @_i = 0), i.e.

imag(@_i) subset ker(@_{i-1}).

[Note: I wish I knew why this was important, but I don't. *mumble
something about homological algebra* :)]

As usual, we drop the subscript on @_i and simply write @ so that @^2
= 0.

> 2) Make the dual vector spaces C^i = C*_i into a cochain
> complex, with a differential going up instead of down.
> Call this d and show d^2 = 0. This is the lattice version
> of deRham theory.

The dual spaces C^i of i-cochains are (by definition of the dual
space) simply the space of linear functionals on chains. Let
< , >_i : C^i x C_i -> R denote the evaluation map.

The coboundary map d_i:C^i->C^{i+1} is defined by

<d_i A,s>_{i+1} := <A,@_{i+1}s>_i

for A in C^i and s in C_{i+1}.

This can be thought of as the lattice version of the generalized
Stokes theorem.

We get our complex with the arrows the other way via

d_0 d_1 d_{n-2} d_{n-1} d_n
C^0 -----> C^1 -----> .... -----> C^{n-1} -----> C^n -----> 0


The fact that d_{i+1} d_i = 0 follows from the boundary of a boundary
is zero, i.e.

<d_{i+1} d_i A,s>_{i+2}
= <d_i A, @_{i+2} s>_{i+1}
= <A, @_{i+1} @_{i+2} s>_i
= 0.

As usual, d_i will be written simply as d so that d^2 = 0.

> 3) Describe the connection A, the curvature F,
> gauge transformations phi and the equations of
> electromagnetism in this setup.

In another post, I stated that

F = dA.

This agrees with the original definition of F via Stokes' theorem

<F,f1 + f2>
= <F,f1> + <F,f2>
= <dA,f1> + <dA,f2>
= <A,@f1> + <A,@f2>

which you can verify is the sum of Ai's as you traverse around the two
loops.

This also covers the gauge transformations

A' = A + dphi

so that

F = dA' = dA

since d^2 = 0.

Note that

<dphi,e1> = phi(t(e1)) - phi(s(e1))

as has been pointed out in several equivalent ways.
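As a concrete check of gauge invariance, here is a tiny Python sketch (made-up numbers) on a single square face: add dphi(e) = phi(target(e)) - phi(source(e)) to each edge and the holonomy around the face is unchanged:

```python
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # (source, target), counterclockwise

def curvature(A):
    # F(f): oriented sum of A around the loop (all four edges point with it)
    return sum(A)

def gauge_transform(A, phi):
    # A' = A + dphi, with dphi(e) = phi(target(e)) - phi(source(e))
    return [a + phi[t] - phi[s] for a, (s, t) in zip(A, edges)]

A = [random.gauss(0, 1) for _ in edges]
phi = [random.gauss(0, 1) for _ in range(4)]
assert abs(curvature(gauge_transform(A, phi)) - curvature(A)) < 1e-9
```

Going around the loop, each phi(v) is added once (as a target) and subtracted once (as a source), which is d^2 = 0 in miniature.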

Resisting the desire to bring up the Hodge star (which is really
needed to make this definition physically valid), we can define an
inner product of i-cochains via

(A,B)_i := sum_{si in S_i} A(si) B(si) = sum_si Ai Bi.

Having an inner product, we can define a map #:C^i->C_i, which
basically takes the coefficients of A in C^i and uses them as
coefficients for a chain #A (in general this will be basis dependent
but we are ignoring that at the moment). For finite dimensional
complexes, # is invertible.

Therefore we have

(A,B)_i = <A,#B>_i.

For reasons that will be clear soon, we will be interested in the
adjoint of d. This can be computed fairly straightforwardly as

(dA,B)
= <dA,#B>
= <A,@#B>
= <A,# #^{-1}@# B>
= (A,d^+ B)

where

d^+ = #^{-1} @ #.
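In the cell basis, with the naive inner product above, # is the identity matrix, so d^+ is literally the matrix transpose of d (equivalently, the matrix of @). A quick numerical sanity check of (dA,B) = (A, d^+ B) on the single-square complex (made-up field values):

```python
import random

# @_1 for a single counterclockwise square (rows = vertices, cols = edges)
D1 = [[-1, 0, 0, 1], [1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]]
d0 = [list(row) for row in zip(*D1)]   # d_0 : C^0 -> C^1 is the transpose of @_1

def apply(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [random.gauss(0, 1) for _ in range(4)]  # a 0-cochain (values at vertices)
B = [random.gauss(0, 1) for _ in range(4)]  # a 1-cochain (values on edges)
assert abs(dot(apply(d0, A), B) - dot(A, apply(D1, B))) < 1e-9  # (dA,B) = (A, d^+ B)
```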

Now we can write down a slightly more generalized action

S(A) = 1/2 (F,F) - (A,j)

for some known 1-cochain j. The first half of Maxwell's equations are
simply

dF = 0

which is a tautology since F = dA and d^2 = 0. The other half of
Maxwell's equations follow from variation of the action:

varS(A)
= (varF,F) - (varA,j)
= (var dA,F) - (varA,j)
= (d varA,F) - (varA,j)
= (varA,d^+ F - j).

Setting varS(A) = 0 for all varA implies

d^+ F = j.

For the purposes of this thread (so far at least), we have j = 0 so
our Maxwell's equations up to this point can be written down as

=============

1.) d F = 0
2.) d^+ F = 0

=============

> 4) Show the action S(A) is a degenerate quadratic
> form on the vector space C^1, but a NONDEGENERATE
> quadratic form on a certain QUOTIENT SPACE of this -
> the space of connections mod gauge transformations!
>
> (Note that modding out by a subspace is more clever
> than picking a complement to this subspace, since
> it involves no arbitrary choices. Picking a complement
> is called "gauge-fixing", but we're working with
> gauge-invariant quantities, instead.)
>
> 5) Start doing some path integrals on this quotient
> space. What's the measure?

I'm exhausted and I wouldn't do these last two items justice anyway so
I'll bail out here.

Eric

A.J. Tolland

Jul 31, 2003, 6:31:23 PM7/31/03
to sci-physic...@moderators.isc.org


On Sat, 26 Jul 2003, Jeffery wrote:

> So does this pattern continue to hold for more than four dimensions?
> Could you have a generalized Euler's formula that holds for any number
> of dimensions? What if you have an infinite number of dimensions?

The formula will work decently well when everything in sight is
finite. It won't really work in infinite dimensions, because you won't be
able to figure out whether the sum should be 0 or 2. The sum won't be
well-defined in most cases anyways.

--A.J.


Aaron Bergman

Aug 1, 2003, 10:49:21 AM8/1/03
to
In article <3fa8470f.03073...@posting.google.com>, Eric A. Forgy
wrote:

>
>Now we can formally write down the complex
>
> @_0 @_1 @_{n-1} @_n
>0 <---- C_0 <---- C_1 ... C_{n-2} <----- C_{n-1} <---- C_n
>
>
>This sequence is said to be exact because when you traverse two stops,
>you always get 0 (another way to say @_{i-1} @_i = 0), i.e.
>
>imag(@_i) subset ker(@_{i-1}).

This is what makes it a complex. An exact sequence has img = ker.

>[Note: I wish I knew why this was important, but I don't. *mumble
>something about homological algebra* :)]

This is important because it means we can take the quotients of
these images and kernels to get (co)homology. That measures the
failure of the sequence to be exact.

Aaron
--
Aaron Bergman
<http://www.princeton.edu/~abergman/>

Tim S

Aug 1, 2003, 5:46:36 PM8/1/03
to
on 14/7/03 9:14 pm, Gerard Westendorp at wes...@xs4all.nl wrote:

>
> Derek Wise wrote:
>
> [..]
>
>> As John mentioned, our model for spacetime is a finite
>> 2-complex: Roughly, this consists of finite sets V, E,
>> and F of Vertices, Edges connecting vertices, and
>> Faces whose boundaries consist of one or more edges.
>
>
> Ah, a generalization of a "circuit". I have been looking
> for the right terminology for this for some time.
>
> [..]
>
>
>> Finally, we are ready to compute the action of this
>> particular connection
>> on our model spacetime. It is:
>>
>> S(A) = F1^2 + F2^2.
>
>
> I don't quite understand this action. It seems that
> A is analogous to the vector potential, and F is analogous
> to the magnetic field. But the electromagnetic field
> normally involves also electric fields, and the action
> is something like B^2-E^2, which would be analogous to
> F^2 - (dA/dt)^2.
> Indeed, there is no time in your model so far.
>
> [..]

Indeed, it is (insofar as the terms have any meaning in a finite space like
this) two-dimensional and 'Euclidean' -- no time. Without time, there is no
electric field. With only two space dimensions, the magnetic field has only
one component.

>
>
>> What form does gauge invariance take in this theory?
>> It is clear that to change the connection without
>> affecting the curvature we need only ensure that the
>> sum of A(e) around each face is unchanged. For
>> example, we can make the following change:
>>
>> A4+t A7+3t
>> *--->---*---<----*
>> | | |
>> | | |
>> A1+t^ ^A3+t ^A6-t (t any real number)
>> | | |
>> | | |
>> *--->---*--->----*
>> A2+t A5-t
>
>
>
> You could make up a "potential" (phi) at each vertex
> such that t_ij= phi_i-phi_j. All conservative "t-fields"
> are gradients of a potential. So the amount of freedom here
> is equal to the amount of vertices. You could subtract 1
> form this because you can define one vertex to be at
> ground potential. So then the dimension of the gauge
> degrees of freedom is V-1, or in the example, equal to 5.

Yes...

>
>> So, we have discovered one degree of gauge freedom.
>> In fact, there are 4 more. I claim that there is one
>> degree of gauge freedom for every edge in a maximal
>> simply connected subgraph of (V,E). (Does this sound
>> right to you, John?)
>
>
> Is that the same as V-1? A triangle has 3 vertices and
> 3 edges. So according to V-1 it has 2 gdof's. But the
> number of edges is 3, so that would be in conflict.
>

_Simply connected_ subgraphs (i.e. no loops). You'd have to remove an edge
from your triangle to make it simply connected. It would then have 2 edges,
as expected.

I think this is the same as V-1. In a simply connected graph, there is one
face (the entire plane, excluding the graph), so from F+V-E = 2, we have
V-E = 1, hence E = V-1. The idea, I think, is that in a loop, one of the
edges doesn't contribute a gauge freedom because of the condition that the
holonomy of the connection around the loop is constant, fixing the value on
one edge if the values on the other edges are known. So you can open up the
loop by throwing away one of the edges, without reducing the number of gauge
degrees of freedom.
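That count can also be read off as the rank of d_0, since the gauge freedoms are exactly the image of d_0, and rank(d_0) = V - (number of connected components) = V - 1 for a connected graph. A sketch checking this on the 6-vertex, 7-edge grid of the example (the edge list below is my reconstruction of its layout):

```python
from fractions import Fraction

edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
V = 6
d0 = [[0] * V for _ in edges]      # rows = edges, columns = vertices
for r, (s, t) in enumerate(edges):
    d0[r][s], d0[r][t] = -1, 1     # dphi(e) = phi(target) - phi(source)

def rank(M):
    """Row rank by exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

assert rank(d0) == V - 1   # 5 gauge degrees of freedom, as claimed
```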

> [..]
>
>
>> Now this makes it easy to understand why the integral
>> in question is almost always infinity.
>
>
> I never understood path integrals. You sum over all
> paths, but not really. Some paths are excluded. Sometimes
> the endpoints are fixed, and sometimes not, but then the
> paths must keep the Hamiltonian fixed. The rules for this
> seem obscure to me.

Here, the 'paths' are all configurations of the A field. Maybe some
conditions will be imposed later?

Tim


Thomas Larsson

Aug 7, 2003, 7:19:27 AM8/7/03
to

ba...@galaxy.ucr.edu (John Baez) wrote in message
news:<bfravi$ro7$1...@glue.ucr.edu>...

> I've been fascinated by claims that lattice U(1) electromagnetism
> exhibits confinement (in certain dimensions, with certain values
> of the coupling constant). Is this the theory they're talking
> about when people make these claims? Can you sketch for us what
> happens in different dimensions in the strong and weak coupling
> regimes? You're not mentioning dimensions here, so I don't know
> if you're working with a 4d lattice or a lattice of arbitrary
> dimension. (I'm going to start Derek off on 2d lattice gauge
> theories, because they're exactly soluble and very pretty.)

Hmm. We do expect that a U(1) LGT in 4D gives Coulomb's law at weak
coupling, don't we? I.e. no confinement.

Recall that a spin model has two phases: the disordered, high-temperature
(HT) phase with correlation functions <phi(x)phi(y)> ~ exp(-|x-y|/xi),
where xi is the correlation length, and the ordered, low-temperature (LT)
phase with constant correlators. In an LGT, we also have
two phases, characterized by the behavior of the Wilson loop operator. In
the strong-coupling, HT phase <W> ~ exp(-A) (area law), and in the weak
coupling, LT phase <W> ~ exp(-P) (perimeter law). The area law means
confinement. To see this, consider a rectangular Wilson loop with temporal
length T and spatial length R. Since <W> ~ exp(-V(R)T), the area law means
that the potential V(R) ~ R, i.e. confinement. OTOH, the perimeter law
means that V(R) = constant, i.e. no confinement. If you do things carefully
the perimeter law has a 1/R correction, which is Coulomb's law.
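The distinction is easy to visualize numerically. A toy sketch (arbitrary values of the string tension sigma and perimeter coefficient mu, chosen for illustration) extracting V(R) = -log<W>/T from each law:

```python
import math

sigma, mu, T = 0.5, 0.3, 100.0   # arbitrary tension, perimeter coefficient, time extent

def V_area(R):
    # area law: <W> ~ exp(-sigma * R * T)  =>  V(R) = sigma * R (linear: confinement)
    return -math.log(math.exp(-sigma * R * T)) / T

def V_perim(R):
    # perimeter law: <W> ~ exp(-mu * 2 * (R + T))  =>  V(R) -> const for large T
    return -math.log(math.exp(-mu * 2.0 * (R + T))) / T

for R in (1.0, 2.0, 4.0):
    print(R, V_area(R), V_perim(R))
```

Doubling R doubles V_area but barely moves V_perim, which is the confining versus non-confining behavior in miniature.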

At sufficiently high temperature, any LGT is confining. So it can only
be non-confining if there is an LT phase and a phase transition in between.
Now, in 2D, LGT's don't have phase transitions - if you pick a gauge where
the temporal links are unity, a 2D LGT is mapped into a 1D spin model.
But I don't think that abelian LGT's in higher dimensions are always
confining. The 3D Ising LGT with gauge group Z_2 is definitely not, since
it is dual to the 3D Ising model which does have a phase transition. OTOH,
Z_2 is a discrete group and thus somewhat special. I have seen arguments
that 4D LGT's are somehow related to 2D spin models, which don't have
an ordered phase if the symmetry is continuous, but I never understood
this argument.

Two references that I found useful are

J. B. Kogut:
"An introduction to lattice gauge theory and lattice systems"
Rev. Mod. Phys. (1977) 659 - 713

J. B. Kogut:
"The lattice gauge theory approach to quantum chromodynamics"
Rev. Mod. Phys. (1983) 775 - 836

They are evidently somewhat old, but I don't think the basics have
changed very much since then. And they are written by one of the field's
pioneers; Kogut was one of Ken Wilson's first students, I believe.

> Some poor soul not cited by Larsson wrote:

> >Anyway, my point is that a strongly coupled U(1) theory (with
> >nonrenormalizable terms) gives a nearly identical partition function as
> >a Nambu-Goto theory of closed oriented strings!

Polyakov wrote two papers in 1979-1980 about a formulation of gauge theory
as a non-linear sigma model in loop space, and in 1981, he came up with
the Polyakov action of string theory. So there seems to be a natural line
of thought here. OTOH, already Faraday formulated electricity (without
magnetism in those days) using flux-lines.

Eric A. Forgy

Aug 7, 2003, 7:19:28 AM8/7/03
to

ba...@galaxy.ucr.edu (John Baez) wrote:

> Eric Forgy wrote:

> > My first naive question...

> See, folks? See how he's playing innocent?
> "My first naive question", he says...
> I can already tell he's up to something!

*chuckle* :)

[snip]



> However, you will see that when all is said and done,
> the formalism I'll describe will only rely on the fact that
> the action is a gauge-invariant quadratic function of the
> connection A. So, it will be easy to generalize what we're
> doing to deal with the issue you mention! Honest! Wait and see!

Ok. This is all I wanted to hear. In my original post I asked:

> > Is there something about QED that makes this issue unimportant?

The fact that the only important thing (at least in order to be able
to generalize) is that the action be a "gauge-invariant quadratic
function of the connection A" is reassuring and it actually helps me
add another criterion in my quest.

> Yeah, yeah... now here comes the tale of your quest for the
> holy grail, the HODGE STAR OPERATOR. We've heard it before.
> And we'll hear it again.

Well, at least I will try to refrain from beating the issue to death
in this thread. I am still very enthusiastically following along and
don't want to pollute things with my agenda. On the other hand, you
have to admit, Derek was the first to bring up Hodge duality. Maybe he
will catch the bug! :)

By the way, since the "you know what" has been such a thorn in my
side, I've recently been chatting with Kotiuga and have looked at
reproducing his helicity stuff and it worked out beautifully. My very
first result from all my effort! I'm now trying to learn Chern-Simons
theory (on a lattice) using noncommutative differential forms. The
challenge so far for me is defining a lattice connection D satisfying

D(f cup a) = df cup a + f cup Da.

I hope I can tie all this stuff into what you guys are doing in this
thread somehow.

Eric

John Baez

Aug 8, 2003, 7:07:01 AM8/8/03
to
In article <3fa8470f.03072...@posting.google.com>,
Eric A. Forgy <fo...@uiuc.edu> wrote:

>Maybe a "revolution" is needed for placing fermions on a lattice.

You say you want a revo-lu-tion, yeah-eah, you know...

>* I recently learned this method of constructing vectors from spinors
>in the context of classical EM polarization. Since an electromagnetic
>plane wave is polarized transverse to its direction of propagation, it
>is naturally described as a two-component spinor. If you go through
>this procedure and determine the vector corresponding to this
>(normalized) spinor, you get a point on a unit sphere. Conversely,
>every point on a unit sphere corresponds to a state of polarization.

Right!

>This sphere is dubbed the "Poincare sphere."

I've never heard it called that. I've always heard it
called the "Riemann sphere" or the "heavenly sphere" or
the "celestial sphere". For a description of how this works
in spacetimes of dimensions 3, 4, 6 and 10, check this out:

http://math.ucr.edu/home/baez/Octonions/node11.html

>Is there anything Poincare DIDN'T know? :)

No. Well, he didn't quite seem to fully understand
the importance of special relativity, though he was
damn close even before Einstein.


Eric A. Forgy

Aug 8, 2003, 8:23:54 AM8/8/03
to

te...@intex.com (Ted Sung) wrote in message:

> I have a basic question - when you drew the model spacetime as shown
> below, how do you decide which way the arrows on the edges go? Does
> the final answer depend on it? Are there multiple ways of specifying
> the arrows and if so, does this invariance have any physical
> implication?

Hi Ted,

You can think of the ei's as basis vectors (because they ARE in an
abstract sense) and the Ai = A(ei) are the components of some vector
(cochain actually) A. If you adopt the sub/superscripts, then I'd
write these as e_i and

A_i = A(e_i).

Then you can write

A = A_i e^i,

where e^i are dual basis vectors defined by how they act on the
original basis vectors, namely

e^i(e_j) = delta^i_j.

If you change the sign of e_i, i.e. reverse the arrow, that is going
to do nothing more than change the sign of A_i and the physical object
A remains unchanged.

This is in precise analogy with vectors in R^n. If you change the
basis, the components change accordingly and the vector itself is
unchanged.
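Eric's sign bookkeeping is easy to spot-check with a few lines of Python
(the numbers are invented; only the signs matter):

```python
# A cochain pairs with a chain; the pairing is basis-independent.
def evaluate(components, chain):
    """<A, c> = sum_i A_i c^i, both written in the same edge basis."""
    return sum(a * c for a, c in zip(components, chain))

# Components of A in the edge basis e_1, e_2, e_3 (made-up numbers).
A = [0.7, -1.2, 0.3]

# Reverse the arrow on e_2: the new basis vector is -e_2, so both the
# component of A and the coefficient of any chain pick up a minus sign.
A_flipped = [A[0], -A[1], A[2]]

chain = [2.0, 5.0, -1.0]                   # some chain, old basis
chain_flipped = [2.0, -5.0, -1.0]          # the same chain, new basis

print(evaluate(A, chain))                  # the pairing is unchanged...
print(evaluate(A_flipped, chain_flipped))  # ...by the reorientation
```

Both prints give the same number: the physical object A doesn't care how
we orient the edges.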

On the other hand, as you might have noticed, each cell of different
dimension has its version of an "arrow". A point has a +/-. An edge
has an arrow "this way" or "that way". A plaquette has a loop
clockwise or counter clockwise. A cube has a right or left-handed
corkscrew. Etc. These are all referred to as orientations. If you look
at the highest dimension cell in the complex, then there is a way to
specify whether adjacent cells have consistent orientations. For
example, in our model, it is possible to define consistent
counterclockwise orientations on each 2-cell. If this is possible, the
complex is orientable. Whether a complex (spacetime) is orientable or
not may have some physical consequences, but this refers only to the
highest dimension cells.

Eric

PS: Since I just had a painful flashback from a past thread, let me
just warn you not to confuse "orientable" and "oriented." The wizards
don't like that :)

A.J. Tolland

Aug 8, 2003, 8:39:45 AM
to


On Thu, 31 Jul 2003, Eric A. Forgy wrote:

> Now we can formally write down the complex
>
> @_0 @_1 @_{n-1} @_n
> 0 <---- C_0 <---- C_1 ... C_{n-2} <----- C_{n-1} <---- C_n
>
>
> This sequence is said to be exact because when you traverse two stops,
> you always get 0 (another way to say @_{i-1} @_i = 0), i.e.
>
> imag(@_i) subset ker(@_{i-1}).
>
> [Note: I wish I knew why this was important, but I don't. *mumble
> something about homological algebra* :)]

You've got your definitions a little wrong.
A sequence is exact if imag(@_i) _equals_ ker(@_{i-1})
What you've done above is demonstrate that C = {C_i, @_i} forms a
chain complex. The nice thing about complexes is that, because
imag(@_{i+1}) subset ker(@_i), you can define the homology groups H_i(C)
= ker(@_i)/imag(@_{i+1}). The nice thing about exact sequences is that
they are automatically chain complexes with vanishing homology groups.
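This distinction can be made concrete in a few lines of Python. Here is a
sketch using exact rational arithmetic from the stdlib, with a hollow and
a filled triangle as the two test cases (my example, not one from the
thread):

```python
from fractions import Fraction

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Hollow triangle: vertices v0,v1,v2; edges e0: v0->v1, e1: v1->v2, e2: v2->v0.
# @_1 sends an edge to (target - source); rows = vertices, columns = edges.
d1 = [[-1,  0,  1],
      [ 1, -1,  0],
      [ 0,  1, -1]]

def betti(n_cells, d_this, d_next):
    """dim H_i = dim ker(@_i) - dim im(@_{i+1})."""
    return (n_cells - rank(d_this)) - rank(d_next)

zero_from_faces = [[0], [0], [0]]   # no 2-cells yet: @_2 = 0
print(betti(3, d1, zero_from_faces))   # H_1 of the hollow triangle: 1 loop

d2_filled = [[1], [1], [1]]         # one face glued along e0 + e1 + e2
print(betti(3, d1, d2_filled))         # filling the face kills the loop: 0
```

The hollow triangle has dim H_1 = 1 (a cycle that bounds nothing); gluing
in the face kills it. An exact sequence would give 0 everywhere.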

--A.J.


Gerard Westendorp

Aug 8, 2003, 11:00:41 AM
to
Eric A. Forgy wrote:
[..]

> Maybe a "revolution" is needed for placing fermions on a lattice. From
> what I understand (which is not much), there hasn't been too much luck
> doing so so far.


I figured out one way to do it, inspired by the Hestenes stuff with
his Clifford algebras.

scalars correspond to vertices (V)
vectors correspond to edges (E)
bi-vectors correspond to f(Ph)aces (P)
tri-vectors correspond to cells (C)
etc.

So there are 1+3+3+1 = 8 dynamic components in a Euclidean 3-complex.
The complex doesn't have to be cubical, but time must be discretized
separately, or can be considered continuous.

You can build
the massless Dirac equation with the 8 components of a bi-spinor
written out into real and imaginary components.

d/dt V = sum of incoming E's
d/dt E = sum of coinciding P's + V(begin) - V(end)
d/dt P = sum of defining E's + C(front) - C(back)
d/dt C = oriented sum of surrounding P's

After some rewriting, this can actually be written as the
discretized massless Dirac equation. I think I also have a trick
for bringing in mass.

What I have not succeeded in doing yet is coupling the Dirac
equation to the Maxwell equation. Then we could start up some
really cool simulations...


Gerard




John Baez

Aug 10, 2003, 4:45:38 AM
to
In article <3fa8470f.03073...@posting.google.com>,

Eric A. Forgy <fo...@uiuc.edu> almost wrote:

>One of the beautiful things in physics that has uncountably many
>applications and should be tatooed on everyone's brain is, "The
>boundary of a boundary is 0."

Right! In the present context, this is the ultimate reason
two vector potentials differing only by a gauge transformation
give the same electromagnetic field. That fact about cochains
is just dual to this geometrically obvious fact about chains!

>im(d_{i-1}) subset ker(d_i).


>
>[Note: I wish I knew why this was important, but I don't.

Well, this is just another way of saying "the boundary of
a boundary is zero". And you just said that has uncountably
many applications and should be tattooed on everyone's brain.

So, you DO know why it's important!

>[*mumble something about homological algebra* :)]

Oh. It's not really so hard. The boundary of a
boundary must be zero, but the converse can fail: something
can have boundary zero even though it's not a boundary.
For example, the outer edge of a square that has a hole
poked in it.

So, in homological algebra you study the quotient space

ker(d_i) / im(d_{i-1})

to see how many chains have boundary zero even though
they're not themselves a boundary! The dimension of this
vector space counts the number of i-dimensional holes.

In short: homological algebra is the holiest branch of
mathematics, and it applies to the physics of things that
have holes in them.

>This also covers the gauge transformations
>
>A' = A + dphi
>
>so that
>
>F = dA' = dA
>
>since d^2 = 0.

Right! Much wonderful stuff is concealed in this
pitifully simple calculation. But if we're working
on a space or spacetime with holes in it, we can
have dA' = dA even though A' is not A + dphi. In
other words, two vector potentials can give the same
electromagnetic field even though they don't differ
by a mere gauge transformation. This is a place where
homological algebra comes into physics - it gives rise
to the Bohm-Aharonov effect.

>Resisting the desire to bring up the Hodge star [...]

Good! Keep it up! :-)

Now I really want Derek to start computing some path
integrals...


Gerard Westendorp

Aug 10, 2003, 8:22:34 PM
to sci-physic...@moderators.isc.org

Derek Wise wrote:

[..]


>>and F is analogous
>>to the magnetic field. But the electromagnetic field
>>normally involves also electric fields, ...
>>
>
> Not quite. The F really is the *electromagnetic* field.
> It's a rank-2 tensor that includes all the components of
> the electric and magnetic fields.


OK, so you have chopped up time as well as space. Interesting.

What I don't get is how you can tell the difference between,
let's say, a (2+0) dimensional magnetic field and a (1+1) dimensional
electromagnetic field.
In other words, how do you get the minus sign in the action (E^2-B^2)
when a space-space (B) loop seems to look just like a space-time
loop (E)?

> Our "2-complex" really is supposed to be *spacetime*, and
> not just space. Honest! :)


Would it still be OK to call something an "n-complex" if
it only discretizes space, and not time?

[..]


> Gauge dof = [# of vertices] - [# of connected components].
>
> This is then equivalent to my method of computing the dof.
>

Cool, I hadn't thought of that. If the components are disconnected,
then each can have its own local ground potential. (That's the
engineer's way of saying it.)

Gerard


John Baez

Aug 11, 2003, 2:31:33 AM
to
In article <4b8cc0a6.03072...@posting.google.com>,
Thomas Larsson <thomas....@hdd.se> wrote:

>At sufficiently high temperature, any LGT is confining.

Oh? How does this square with the widespread belief
that above ~2 trillion kelvin, quantum chromodynamics
*ceases* to be confining and we get a new phase called
a quark-gluon plasma?

Maybe you meant "coupling" instead of "temperature"?

I may have something more intelligent to say about your
long and interesting post after I get this cleared up.


John Baez

Aug 12, 2003, 2:36:56 AM
to
I haven't heard from Derek for a long time, and I'm beginning
to fear that he's fallen into a black hole. This is a shockingly
common fate for students of quantum gravity. I'm sad to say
that two other grad students of mine have fallen into black holes
this summer.... I hope Derek isn't the third.

As a kind of SOS, I will post a bit more about what we are trying
to do. We are trying to quantize electromagnetism on a lattice.

By "electromagnetism" I mean the vacuum Maxwell equations, suitably
discretized.

By a "lattice" I mean a 2-complex, as defined - a bit vaguely -
earlier in this thread.

By "quantize" I mean we're trying to do Euclidean path integrals,
treating electromagnetism as a gauge theory with gauge group R.
We realized some time ago that the path integrals diverge due to the
gauge-invariance of the action, and now we're trying to solve that
problem. There's a very pretty way to solve it.

Here was the game plan:

In article <bfp2j1$qsf$1...@glue.ucr.edu>,
John Baez <ba...@galaxy.ucr.edu> wrote:

> Derek the Wise wrote:

>> V,E,P = sets of Vertices, Edges, and Plaquettes (or Faces)
>> C_0 = formal linear combinations of elements of V
>> C_1 = " " E
>> C_2 = " "

>This is all very nice. Next you should do this,
>to the extent you haven't already:
>
>1) Make these C_i's into a chain complex, with
>a "differential" going down from C_i to C_{i-1}.

We've pretty much beaten this to death, with lots of help from
Eric Forgy. So, on to the next step!

>2) Make the dual vector spaces C^i = C*_i into a cochain
>complex, with a differential going up instead of down.
>Call this d and show d^2 = 0. This is the lattice version
>of deRham theory.

This is pretty easy: you just let C^i be the dual of
the vector space C_i and take the adjoint of the differential

@_i : C_i -> C_{i-1}

to get the differential

d_{i-1}: C_{i-1} -> C_i

We call @_i just "@" for short, and d_i just "d".

Taking the adjoint of the equation

@^2 = 0

which says that C_i is a "chain complex", we get

d^2 = 0

which says that C^i is a "cochain complex".
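Here's a quick numerical sketch of this step, using the theta-shaped
2-complex that shows up later in the thread (2 vertices, 3 edges, 2
plaquettes). The boundary maps become integer matrices, the coboundaries
are their transposes, and @^2 = 0 and d^2 = 0 are the same matrix
equation read twice:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# @_1 : C_1 -> C_0, columns = edges, @e = target - source
# (e1: v2->v1, e2: v1->v2, e3: v2->v1; rows ordered v1, v2).
D1 = [[ 1, -1,  1],
      [-1,  1, -1]]

# @_2 : C_2 -> C_1, columns = plaquettes: @p1 = -e1 - e2, @p2 = e2 + e3.
D2 = [[-1,  0],
      [-1,  1],
      [ 0,  1]]

print(matmul(D1, D2))   # @@ = 0: the boundary of a boundary vanishes

# The coboundary is just the transpose, acting the other way, so d^2 = 0
# is literally the transpose of the equation above.
d0, d1 = transpose(D1), transpose(D2)
print(matmul(d1, d0))   # dd = 0
```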

We are now off and running! Let us see how this algebra
becomes physics!

>3) Describe the connection A, the curvature F,
>gauge transformations phi and the equations of
>electromagnetism in this setup.

Eric has done this. The most basic important part
is that the connection or "vector potential" A lives in C_1
(it's a "1-form"), while
the curvature or "electromagnetic field" F lives in C_2
(it's a "2-form"), and

F = dA

A gauge transformation phi lives in C_0
(it's a "0-form"), and
gauge transformations act on connections via:

A |-> A + d phi.

We say that A and A + d phi are "gauge-equivalent".
Gauge-equivalent connections have the same curvature
(in this theory - not always!), since

d(A + d phi) = dA

thanks to the magic equation

d^2 = 0.
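A numerical spot-check of this, on the theta-shaped complex Derek uses
later in the thread: pick any connection and any gauge transformation,
and the curvature comes out the same (all numbers invented):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# d_0 : C^0 -> C^1 and d_1 : C^1 -> C^2 (transposes of the boundary maps).
d0 = [[ 1, -1],     # (d phi)(e) = phi(target) - phi(source)
      [-1,  1],
      [ 1, -1]]
d1 = [[-1, -1, 0],  # F(p1) = -A1 - A2
      [ 0,  1, 1]]  # F(p2) =  A2 + A3

A   = [0.4, -2.0, 1.1]   # an arbitrary connection
phi = [3.0, -0.5]        # an arbitrary gauge transformation

A_gauged = [a + g for a, g in zip(A, matvec(d0, phi))]

print(matvec(d1, A))        # F = dA
print(matvec(d1, A_gauged)) # same F: d(A + d phi) = dA since d^2 = 0
```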

>4) Show the action S(A) is a degenerate quadratic
>form on the vector space C^1, but a NONDEGENERATE
>quadratic form on a certain QUOTIENT SPACE of this -
>the space of connections mod gauge transformations!

I hope Derek hasn't been keeping quiet because he's been unable
to prove this in general. It's only true under some conditions!
Let's work it out.

The action is defined by

S(A) = sum_{p in P} F(p)^2

That is, it's the sum over all plaquettes of the square
of the electromagnetic field flowing through that plaquette.

The action depends quadratically on A so we call it a "quadratic form".

It's obvious that S(A) vanishes iff F = 0. Thus there are
nonzero connections A with F = 0 iff there are nonzero
connections with S(A) = 0 - and when this happens, we say
the action is a "degenerate" quadratic form.

When we try to do a path integral like

integral exp(-S(A)) dA

we get infinity when the action is a degenerate quadratic
form. We'll see this explicitly as soon as Derek reminds us
of the formula for this integral. So, it's a nuisance when
S(A) is degenerate and we want to cure this problem somehow!

One way to get a nonzero connection A with dA = 0 is to take

A = d phi,

thanks to the magic equation d^2 = 0. This suggests that
the "reason" S(A) might be degenerate is the existence of
gauge transformations.

But in fact there could be nonzero connections with F = 0
that weren't of the form A = d phi.

In other words,

{connections A with A = d phi}

is a subspace of

{connections A with dA = 0}.

The second space contains ALL the problematic connections
(the ones whose action is zero), while the first only
contains SOME (the ones that are gauge-equivalent to zero).
This leads to some interesting subtleties.

The difference in size of these two spaces is measured by
taking the big one mod the little one, giving something
called the "first deRham cohomology":

H^1 = {connections A with dA = 0} / {connections A with A = d phi}

or in more terse language

H^1 = ker(d_1)/im(d_0).

We have

H^1 = 0

precisely when

{connections A with dA = 0} = {connections A with A = d phi}.

Can Derek think of some examples of 2-complexes where
this is true? How about some examples where it's not?

If we take the space of connections and mod out by all
the problematic ones, we get a space on which the action
is a well-defined *nondegenerate* quadratic form:

{connections}/{connections A with dA = 0}

or in more terse language

C^1/ker(d_1)

We can do path integrals here without getting infinities.

On the other hand, there are physical arguments saying
it's more justified to mod out by connections that are
gauge-equivalent to zero, since gauge-equivalent connections
are widely agreed to be "physically the same". If we
do this, we get the bigger space

{connections}/{connections A with A = d phi}

or in more terse language

C^1/im(d_0).

The action is a well-defined quadratic form on this bigger space
(since we're only identifying connections with the same action),
but it MAY NOT BE NONDEGENERATE, since we may not have
modded out by ALL the problematic connections!

The action will be nondegenerate on this bigger space

{connections}/{connections A with A = d phi}

precisely when ALL the problematic connections are
gauge-equivalent to zero, i.e.

H^1 = 0.

In this case, the bigger space is just the same as the
smaller space

{connections}/{connections A with dA = 0}

and we are happy.

But, when is H^1 = 0?

Once we settle this we can charge ahead and actually
start DOING some path integrals under the assumption that
H^1 = 0.

(Interestingly, when we go to the U(1) version of electromagnetism
this assumption will not be needed. Even with gauge group R,
we can still do path integrals when H^1 is nonzero as long as
we are computing an integral on the space

{connections}/{connections A with dA = 0}

e.g. when we are computing the expectation value of an
observable that doesn't change when you add to the connection
some 1-form A with dA = 0. Lovers of subtlety might enjoy
pondering which observables are like this, and which aren't.)

Eric A. Forgy

Aug 12, 2003, 6:17:58 PM
to
Gerard Westendorp <wes...@xs4all.nl> wrote:
> Derek Wise wrote:

> What I don't get is how you can tell the difference between,
> let's say, a (2+0) dimensional magnetic field and a (1+1) dimensional
> electromagnetic field.
> In other words, how do you get the minus sign in the action (E^2-B^2)
> when a space-space (B) loop seems to look just like a space-time
> loop (E)?

Shh! We're not supposed to be asking questions like this! :)

The answer involves the Hodge star and metric tensor. We are ignoring
this for the time being with the understanding that anything we do can
be extended to the actual physically realistic/interesting cases later
on.

[snip]

> Would it still be OK to call something an "n-complex" if
> it only discretizes space, and not time?

A cell complex (as we are using it here) is basically a way to
discretize a smooth manifold. If we let M denote our spacetime
manifold, then we can either cook up a cell complex discretizing M
directly, or we can define a global time axis and write

M = S x R

where S is a smooth manifold representing space and R is a
parameterization of time. Doing this, we can cook up a cell complex
representing S only and keep time a continuum.

In other words, the definition of a cell complex is sufficiently
general to handle just about anything you can dream of :)

As a reminder, I suggest referring to a cell complex representing
n-dimensional space as an "n-complex" and a cell complex representing
(n+1)-dimensional spacetime (with time aligned with time edges) as an
"(n+1)-complex". Later on, if we become interested in spacetime
complexes where there are no edges aligned with time, then we might
distinguish "static" vs "dynamic" (n+1)-complexes. If the edges are
not time aligned, then you can think of the nodes as being in motion.
A time-aligned edge likewise represents a node at rest.

Best regards,
Eric

Derek Wise

Aug 15, 2003, 5:40:50 PM
to
John Baez wrote:
> I haven't heard from Derek for a long time, and I'm beginning
> to fear that he's fallen into a black hole. This is a shockingly
> common fate for students of quantum gravity. I'm sad to say
> that two other grad students of mine have fallen into black holes
> this summer.... I hope Derek isn't the third.

No, no... nothing that serious. I only fell into a *wormhole* that
led me to a region of the universe WITHOUT INTERNET CONNECTIONS!
Can you imagine? It took me a while to get back -- hence my email
silence. It was actually a rather fortunate mishap, though,
because the wormhole got me thinking about topological things that
can happen with 2-complexes, which will be part of the content of
this post! :) In fact, maybe I'll jump straight ahead to that
stuff first.

> H^1 = ker(d_1)/im(d_0).
>
> We have
>
> H^1 = 0
>
> precisely when
>
> {connections A with dA = 0} = {connections A with A = d phi}.
>
> Can Derek think of some examples of 2-complexes where
> this is true? How about some examples where it's not?

Sure he can! Thinking of examples where every closed connection
is exact is easy. In fact, I think we essentially showed that this
was the case for the example we've been using earlier in this
thread. So how about a complex where we can have a closed
connection that is not exact? Taking a hint from regular 2d
electromagnetism, we might suspect something about simple
connectivity. There's a really simple example that is unfortunately
not so simple to draw in ASCII. I'll give it a shot anyway:

--------v1-------
/ | \
/ ^e1 \
| | |
| p -v2- ^e3
| / \ |
| | | |
| | ^e2 |
| | | |
| \ / |
| \____/ |
\ /
\ /
\_______________/

This should look (if you squint hard) somewhat like an
annulus with a cut in it at the top. Note there are 2 vertices,
3 edges, and only one plaquette (p) -- there is no plaquette
whose boundary is just e2. Now to see that this admits a
closed but not exact connection, write A(ei)=Ai and then the
condition dA=0 is just A3-A1-A2 = 0. This clearly has lots of
solutions where A2 is not zero. But if A were exact then we
would have A2=phi(t(e2))-phi(s(e2))=phi(v2)-phi(v2)=0, so A
isn't exact.
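Derek's count can be automated. Here's a sketch computing
dim H^1 = dim ker(d_1) - dim im(d_0) for this complex by exact rank
computations (stdlib only; the matrices are read off his description):

```python
from fractions import Fraction

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# d_0 : C^0 -> C^1, rows = edges e1, e2, e3, columns = vertices v1, v2.
# e1 and e3 run from v2 to v1; e2 is a loop at v2, so (d phi)(e2) = 0.
d0 = [[ 1, -1],
      [ 0,  0],
      [ 1, -1]]

# d_1 : C^1 -> C^2: the single plaquette p gives dA(p) = A3 - A1 - A2.
d1 = [[-1, -1, 1]]

n_edges = 3
dim_H1 = (n_edges - rank(d1)) - rank(d0)
print(dim_H1)   # 1: one closed-but-not-exact direction, as found above
```

There are 3 - 1 = 2 closed directions but only a 1-dimensional family of
gauge transformations (rank d_0 = 1), leaving one closed-but-not-exact
direction: exactly the A2 degree of freedom above.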

So, to do path integrals with the assumption that H^1 = 0, it seems
we need to consider only 2-complexes that don't have "holes" of
this kind. I.e. as long as we use 2-complexes without any "missing
plaquettes," we are safe to use the space "connections mod gauge
transformations" in doing path integrals instead of "connections mod
closed connections" (which doesn't have as nice a ring to it).

GAUSSIAN INTEGRALS
------------------

Very early in this thread, John had me explain why we get infinity
when we naively do certain path integrals over the space of all
connections. Since we seem close to fixing this problem by modding
out by all the gauge equivalent connections (as long as our lattice
has the right topology), John seems to be prompting me to review how
we actually *do* this integral:

> When we try to do a path integral like
>
> integral exp(-S(A)) dA
>
> we get infinity when the action is a degenerate quadratic
> form. We'll see this explicitly as soon as Derek reminds us
> of the formula for this integral.

The measure here, dA, is just Lebesgue measure on R^E. That is, we
really have a multiple integral over E copies of R -- one for each
edge in the lattice. Since our action is a quadratic function of the
Ai, we can write it as a matrix equation:

S(A) = M^i_j A^j A_i

or, / \ /A1\
| | |A2|
S(A) = (A1 A2 A3 ... AE)| M | | .|
| | | .|
| | | .|
\ / \AE/

where M is a *symmetric* ExE matrix.

Now to the integral. Here's the magic formula (which I'm probably not
going to explain until pressed to do so):

int_{R^E} exp(-S(A)) dA = sqrt(pi^E/det(M)).

The problem, as you might guess, is that M generally has determinant
zero. So there we have it -- a nice review of the wrong way to
calculate path integrals. Now, though, we can try to use the same
method, appropriately fixed-up by using our new quotient space. I
should try and say some things about this now, but I have other stuff
I need to work on, so it'll have to wait. Soon.
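Derek's magic formula is easy to sanity-check numerically in a
nondegenerate case. The matrix here is invented, just symmetric positive
definite so the integral converges:

```python
import math

# An invented positive definite M (eigenvalues 1 and 3), so E = 2.
M = [[2.0, 1.0],
     [1.0, 2.0]]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]

exact = math.sqrt(math.pi ** 2 / detM)   # the formula's prediction

# Brute-force midpoint sum over a box big enough that the tails are
# negligible (exp(-64) at the edges).
h, L = 0.02, 8.0
n = int(2 * L / h)
num = 0.0
for i in range(n):
    a1 = -L + (i + 0.5) * h
    for j in range(n):
        a2 = -L + (j + 0.5) * h
        S = M[0][0]*a1*a1 + (M[0][1] + M[1][0])*a1*a2 + M[1][1]*a2*a2
        num += math.exp(-S) * h * h

print(exact)
print(num)   # agrees to several decimal places
```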

DeReK

PS: To send me spam, please use the address in the header.
Otherwise you can email using my first name at the domain math.ucr.edu

Derek Wise

Aug 15, 2003, 5:45:37 PM
to
PATH INTEGRALS ON A LATTICE
---------------------------

OK. Today, as promised, I will finally calculate a path integral in
lattice electromagnetics. The example we have been looking at so
far (the one with two square plaquettes sharing an edge) was great
for building up the theory, but now that we're actually doing
calculations it's nice to have some even simpler examples. So this
post will be concerned entirely with the following extremely tiny
spacetime:


----------v1---------
/ | \
/ | \
/ | \
| | |
| | |
^ e1 V e2 ^ e3
| | |
| ===> | <=== |
\ p1 | p2 /
\ | /
\ | /
---------v2---------

This spacetime has:
2 vertices: V={v1,v2}
3 edges: E={ e1:v2-->v1 , e2:v1-->v2, e3:v2-->v1 }
2 plaquettes: P={ p1:e1==>e2* , p2:e3*==>e2 }

So, the first integral we want to do is the partition function:

int exp(-S(A)) DA.

I think I'll do this two ways. First, I'll do it the wrong way,
the way that gives the answer infinity. Then we can go back and
fix the problem.

I. The wrong way:
-----------------

A connection A lives in C^1, also known as R^E in the present
context. We have three edges, and we want to integrate over all
connections, so we have the following triple integral:

/ / /
| | | exp(-S(A)) dA1 dA2 dA3
/ / /

Let's work out what the action is in the integrand:

S(A) := Sum_{p in P} F(p)^2
= Sum_{p in P} [dA(p)]^2
= Sum_{p in P} [A(t(p))-A(s(p))]^2
= (-A2-A1)^2 + (A2+A3)^2
= A1^2 + 2 A2^2 + A3^2 + 2 A1 A2 + 2 A2 A3

[ 1 1 0 ] [ A1 ]
= (A1 A2 A3)[ 1 2 1 ] [ A2 ]
[ 0 1 1 ] [ A3 ]

Recalling the formula from my previous post, we can now evaluate
the integral:

/ / / / pi^3 \
| | | exp(-S(A)) dA1 dA2 dA3 = sqrt | ------ |
/ / / \ det(M) /

where M is the matrix in the above expression for S(A). Now
evaluate the determinant of M, and you'll see where the problem is
with this approach.
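Spelling out the punchline in a few lines of Python: the determinant
vanishes, and the null direction is exactly the pure-gauge direction
d phi = (1,-1,1) that gets modded out in part II:

```python
# The 3x3 matrix from the action above, and its determinant by cofactors.
M = [[1, 1, 0],
     [1, 2, 1],
     [0, 1, 1]]

det = (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
     - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
     + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

print(det)   # 0: the quadratic form is degenerate, so the integral diverges

# The null vector is the pure-gauge direction A = d phi with phi = (1, 0):
v = [1, -1, 1]
print([sum(m * x for m, x in zip(row, v)) for row in M])   # [0, 0, 0]
```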

II. The fixed-up way:
----------------------

First of all, we need to note that this particular 2-complex has
H^1=0, i.e. every closed connection is exact, i.e. every connection
with dA=0 has A=dphi for some phi. Since I posted stuff about this
yesterday, I won't discuss it here except to say we are safe to do
path integrals over the space "connections mod gauge
transformations" without worrying about further degeneracy.

So, let's form this quotient space. We'll let

J:= im(d_0) = {d(phi) | phi in C^0}
= {(phi1-phi2,phi2-phi1,phi1-phi2) | phi1,phi2 in R}
= {(a,-a,a) | a in R}

Then the quotient space C^1/J is the set of all cosets A+J with A a
connection in C^1. I.e.

C^1/J = R^3/{(a,-a,a)} = { A + {(a,-a,a)} | A in R^3}

Since in our case everything in sight just looks like cartesian
products of R, the quotient space inherits a norm in the natural
way:

|| A+J || := inf_{B in J} || A+B ||.

What does this look like? It's just the Euclidean distance from the
subspace J to the point A. This leads us to an interesting point.
R^n is not just a vector space but an inner product space, which
means we have a sensible notion of orthogonality. It's not too hard
to see (by drawing a picture if necessary) that the norm on C^1/J
just assigns to A+J the norm of the orthogonal projection of A onto
the orthogonal complement of J, J^{perp}. What does this mean?
It means that using the natural Lebesgue measure on the Euclidean
space C^1/J = R^3/R is just the same as my earlier suggestion of
gauge fixing by integrating over the orthogonal complement of the
subspace determined by gauge transformations!

We know that C^1/J is R^3/R, which is isomorpic to R^2. We have
only to determine which R^2 subspace it is. Using the above
observation about orthogonality, we can pick any basis of J^{perp}.
Thus,

C^1/J = span { (1,1,0),(1,-1,-2) }
= { (x1+x2 , x1-x2 , -2x2) | x1,x2 in R } =~ R^2


Now let's calculate the curvature, action, and finally the path
integral.

We define the curvature on connections mod gauge transformations as
F(A+J)=F(A).
It's easy to check that this is well defined (i.e. not dependent on
which element of the coset A+J one uses). Then we calculate the
action in the natural way:

S(A+J) = Sum_{p in P} F(A+J)(p)^2
       = (-A1-A2)^2 + (A2+A3)^2
       = (-x1-x2-x1+x2)^2 + (x1-x2-2x2)^2
       = (-2 x1)^2 + (x1-3x2)^2
       = 5 x1^2 - 6 x1x2 + 9 x2^2

=(x1 x2)[ 5 -3 ] [x1]
[-3 9 ] [x2]

Now we can pull out our magic formula again and do the path
integral:

/ / / pi^2 \
| | exp(-S(A)) dx1 dx2 = sqrt | ------ | .
/ / \ det(M) /

This time, though, the determinant of M is not zero but 36, so we
get pi/6.
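The whole fixed-up computation fits in a few lines of Python, reading the
matrix off the quadratic form rather than expanding by hand:

```python
import math

# A general element of J-perp in the coordinates (x1, x2) chosen above:
# A(x1, x2) = (x1 + x2, x1 - x2, -2*x2).
def action(x1, x2):
    A1, A2, A3 = x1 + x2, x1 - x2, -2 * x2
    return (-A2 - A1) ** 2 + (A2 + A3) ** 2   # F(p1)^2 + F(p2)^2

# Read off the symmetric matrix M from the quadratic form.
M11 = action(1, 0)                     # coefficient of x1^2
M22 = action(0, 1)                     # coefficient of x2^2
M12 = (action(1, 1) - M11 - M22) / 2   # off-diagonal entry
detM = M11 * M22 - M12 ** 2

print(detM)                            # 36
print(math.sqrt(math.pi ** 2 / detM))  # pi/6 ~ 0.5236
```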

Eric A. Forgy

Aug 18, 2003, 12:36:26 AM
to
Welcome back! :)

dere...@yahoo.com (Derek Wise) wrote:

> Now to the integral. Here's the magic formula (which I'm probably not
> going to explain until pressed to do so):
>
> int_{R^E} exp(-S(A)) dA = sqrt(pi^E/det(M)).

*press*

Please explain! That is too cool to let slip by :)

I can see it is something like (probably exactly like) integrating a
Gaussian

I = int_R exp(-x^2} dx

I^2
= int_R^2 exp(-x^2-y^2) dx dy
= 2*pi*int_0^inf exp(-r^2) r dr
= pi*int_0^inf exp(-u) du
= pi

Therefore

I = sqrt(pi)

Uh! Uh! I can almost see it! :)

I suck at linear algebra, but I'm guessing that if M is symmetric it
can probably be decomposed into the product of a matrix and its
transpose, like

M = T^t T

Then if we define

A' = T(A)

then

S(A) = A^t M A = A^t T^t T A = A'^t A'

and

exp(-S(A)) = exp(-A1'^2-A2'^2-...-AE'^2)

which is just a product of E Gaussians.

Then

I
= int_{R^E} exp(-S(A)) dA
= int_{R^E} exp(-\sum_i Ai'^2) 1/det(T) dA'
= 1/det(T) prod_i [int_R exp(-Ai'^2) dAi']
= 1/det(T) prod_i sqrt(pi)
= 1/det(T) * sqrt(pi^E)

However, since M = T^t T we have

det(M) = det(T)^2

so that

I = sqrt(pi^E/det(M))

The obvious questionable step I see is

dA = 1/det(T) dA'

The other way makes perfect sense because

dA' = dA1'/\.../\dAE' = det(T) dA1/\.../\dAE,

but to get dA = 1/det(T) dA' requires that T already be
invertible, which assumes det(M) != 0. I guess it is alright. On the
bright side, it should be easy to correct my work now that it is spelled
out like this :)
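Eric's factorization M = T^t T is a Cholesky factorization, which exists
when M is symmetric positive *definite* (so in particular det(M) > 0,
consistent with his worry at the end). A 2x2 sketch with an invented M:

```python
import math

# Invented symmetric positive definite M.
M = [[2.0, 1.0],
     [1.0, 2.0]]

# Upper-triangular T with M = T^t T, worked by hand for the 2x2 case.
t11 = math.sqrt(M[0][0])
t12 = M[0][1] / t11
t22 = math.sqrt(M[1][1] - t12 ** 2)

detT = t11 * t22                      # triangular: product of the diagonal
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(detT ** 2, detM)                # equal: det(M) = det(T)^2
```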

Eric

Phil

Aug 18, 2003, 5:29:53 PM
to

"Eric A. Forgy" <fo...@uiuc.edu> wrote in message
news:3fa8470f.03072...@posting.google.com...

>
> "Phil" <ph...@weburbia.com> wrote:
> > Usually spin half objects sit on vertices, spin one on edges etc.,
> > but the metric is spin two, so perhaps it will require variables
> > defined on at least 3 dimensional simplices. Have you thought about that?
>
> You bring up a good point. In computational EM, a "revolution" occured
> when people realized that E and H degrees of freedom should be
> associated with edges rather than nodes. What makes you think spin 1/2
> objects should be associated with nodes?? I'd think spin 0 objects are
> associated with nodes, spin 1 objects are associated with oriented
> edges. Thinking back to the quantum gravity lectures, I'd be tempted
> to think a spin 1/2 object should be associated with something in
> between a node and an oriented edge, namely a directed edge (?).

When I said "usually" I was thinking of my own narrow experience of doing
lattice QCD, where the fermions sit on the vertices and the gauge group
on the links. I agree that it can be done in other ways.

> ...


> As we learned in the QG seminar, the dual of a directed edge
> corresponds to an oppositely directed edge and placing two edges next
> to each other corresponds to tensor product. This is precisely what
> you do when you construct a vector from a spinor.^* (I hope I learned
> that correctly.)

I have another way of using this tensor product which supports my case.
A fermionic field on the vertices forms a vector with components indexed
by the vertices. The gauge fields form a sparse matrix over the same
vector space. The vector space of these matrices is contained in the
tensor product of two of the vector fields. This is consistent with the
fact that the tensor product of two spin half fermionic fields gives you
a spin one field. I know that is very vague, but having this kind of
consistency may open some mathematical doors.

> ...


> Maybe a "revolution" is needed for placing fermions on a lattice. From
> what I understand (which is not much), there hasn't been too much luck
> doing so so far.

I always liked the Kogut-Susskind lattice formulation of fermions. The
gamma matrices can be transformed away, leaving just a set of sign
factors associated with each lattice link (edge). The gauge components
are multiplied by the factors in the action. These signs must have the
property that the product of any four around a square plaquette is minus
one. It is very elegant. If you are doing U(1) gauge theory you don't
even have to put the signs in by hand; just change the sign of the gauge
action so that the minimum action corresponds to the case where the
product of gauge fields round a plaquette is minus one.

The K-S formulation suffers from the infamous fermion doubling problem,
which I think must be what you refer to above. If you are doing
computational lattice gauge theory beyond the quenched approximation
this is an issue, but so long as we are dealing with toy models it is
not such a showstopper.

The real issue here would be how such a thing can be applied to a
general simplex.
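The Kogut-Susskind signs Phil describes can be sketched on a hypercubic
lattice. One standard convention (an assumption on my part; conventions
vary) is eta_mu(n) = (-1)^(n_1 + ... + n_{mu-1}), and then the product of
the four phases around any plaquette is -1:

```python
# Staggered-fermion phases on a hypercubic lattice (one common convention).
def eta(mu, n):
    return (-1) ** sum(n[:mu])

def plaquette_sign(n, mu, nu):
    """Product of the four K-S phases around the (mu, nu) plaquette at n."""
    n_mu = list(n); n_mu[mu] += 1
    n_nu = list(n); n_nu[nu] += 1
    return eta(mu, n) * eta(nu, n_mu) * eta(mu, n_nu) * eta(nu, n)

# Check every plaquette orientation at a block of sites in 4 dimensions.
signs = {plaquette_sign((t, x, y, z), mu, nu)
         for t in range(2) for x in range(2) for y in range(2)
         for z in range(2)
         for mu in range(4) for nu in range(4) if mu != nu}
print(signs)   # {-1}: every plaquette picks up Phil's minus sign
```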


Eric A. Forgy

Aug 18, 2003, 5:31:20 PM
to
dere...@yahoo.com (Derek Wise) wrote:

[snip]

[Note: "Tab" doesn't work too well for ascii art :)]

> ----------v1---------
> / | \
> / | \
> / | \
> | | |
> | | |
> ^ e1 V e2 ^ e3
> | | |
> | ===> | <=== |
> \ p1 | p2 /
> \ | /
> \ | /
> ---------v2---------
>
> This spacetime has:
> 2 vertices: V={v1,v2}
> 3 edges: E={ e1:v2-->v1 , e2:v1-->v2, e3:v2-->v1 }
> 2 plaquettes: P={ p1:e1==>e2* , p2:e3*==>e2 }
>
> So, the first integral we want to do is the partition function:
>
> int exp(-S(A)) DA.

I have another naive question (no, REALLY naive).

In this post you showed how the 3x3 matrix was degenerate so you went
to the space C^1/J, which ended up giving a 2x2 nondegenerate matrix.
However, 2 just happens to correspond to the number of plaquettes. Why
can't we just consider something like a modified partition function

int exp(-S(F)) DF

?? If I'm not mistaken, S already IS a nondegenerate quadratic form in
F without having to do anything fancy. Why can't we just integrate
over configuration of F rather than A, where we need to cook up fancy
shmancy equivalence classes?

Eric

Thomas Larsson

Aug 18, 2003, 9:36:02 PM
to sci-physic...@moderators.isc.org

ba...@galaxy.ucr.edu (John Baez) wrote in message news:<bh7d85$csh$1...@glue.ucr.edu>...

> In article <4b8cc0a6.03072...@posting.google.com>,
> Thomas Larsson <thomas....@hdd.se> wrote:
>
> >At sufficiently high temperature, any LGT is confining.
>
> Oh? How does this square with the widespread belief
> that above ~2 trillion kelvin, quantum chromodynamics
> *ceases* to be confining and we get a new phase called
> a quark-gluon plasma?
>
> Maybe you meant "coupling" instead of "temperature"?

I don't think so, but I'm probably using temperature in a non-physical sense.
I was referring to the well-known correspondence between a classical
statistical system with partition function Z = Tr exp(-H/k_B T) and
a quantum system (in Euclidean time) with Z = Tr exp(-S/hbar). So
classical thermal fluctuations correspond to quantum fluctuations at
zero temperature, and Boltzmann's constant x temperature ~ Planck's
constant.

With this identification, which is standard, high T means confinement. The
high T phase is disordered, so the correlation functions fall off rapidly,
and the area law certainly implies faster falloff than the perimeter law.
It is the area law which gives a linearly growing quark-quark potential,
i.e. confinement, whereas the perimeter law yields a constant potential.
An interesting point noted by Kogut is that it is the strong-coupling phase
that has a simple description in LGT, while weak coupling is complicated.

But I am really confused about deconfinement and physical temperature.
Unfortunately I never understood how to treat non-zero temperature
in a quantum context.

>
> I may have something more intelligent to say about your
> long and interesting post after I get this cleared up.

The really interesting part is probably the references, which are quite
nice introductions. The second review also contains some rather advanced
stuff such as numerical techniques and lattice fermions.

Matthew Nobes

unread,
Aug 19, 2003, 12:59:37 AM8/19/03
to
On Mon, 18 Aug 2003 21:29:53 +0000, Phil wrote:

>
> "Eric A. Forgy" <fo...@uiuc.edu> wrote in message
> news:3fa8470f.03072...@posting.google.com...

[snip]

>> Maybe a "revolution" is needed for placing fermions on a lattice. From
>> what I understand (which is not much), there hasn't been too much luck
>> doing so so far.

I'm not so sure this is true. There are two formulations that are
reasonably new, domain wall and overlap fermions. I don't know much about
them, but I do know that they have really nice chiral properties.

Domain wall fermions add a fifth dimension to the simulations, this
allows one to write down a lattice action that has the full chiral
symmetry.

However, both these formulations are extremely slow.



> I always liked the Kogut-Susskind lattice formulation of fermions. The
> gamma matrices can be transformed away leaving just a set of sign factors
> associated with each lattice link (edge). The gauge components are
> multiplied
> by the factors in the action. These signs must have the property
> that the product of any four around a square plaquette is minus one. It is
> very elegant. If you are doing U(1) gauge theory you dont even have to put
> the signs in by hand, just change the sign of the gauge action so that the
> minimum action corresponds to the case where the product of gauge fields
> round a plaquette is minus one.
>
> The K-S formulation suffers from the infamous fermion doubling problem,
> which
> I think must be what you refer to above. If you are doing computational
> lattice
> gauge theory beyond the quenched approximation this is an issue, but so long
> as we are dealing with toy models it is not such a showstopper.

Actually the #1 method for unquenched lattice simulations these days is an
improved version of the K-S action. KS (or staggered) fermions are great
because they're vastly faster than any other formulation. And they
preserve a remnant chiral symmetry, which protects you from additive mass
renormalization.

With staggered fermions you do the staggering you describe above to
go from 16 doubler flavours (actually now refered to as "tastes") down to
four. The fermion path integral then gives you a factor

det{M}

To do one flavour you just take the fourth root of this determinant (note,
many people don't like this step). The collaboration I'm involved with is
working with a couple of other groups to get phenomenological results out
of this program. There are some details in

hep-lat/0304004

for those interested in state of the art large scale calculations.

Matthew Nobes

John Baez

unread,
Aug 20, 2003, 1:13:42 AM8/20/03
to
In article <c2e84040.0308...@posting.google.com>,
Derek Wise <dere...@yahoo.com> wrote:

>John Baez wrote:

>> I haven't heard from Derek for a long time, and I'm beginning
>> to fear that he's fallen into a black hole. This is a shockingly
>> common fate for students of quantum gravity. I'm sad to say
>> that two other grad students of mine have fallen into black holes
>> this summer.... I hope Derek isn't the third.

>No, no... nothing that serious. I only fell into a *wormhole* that
>led me to a region of the universe WITHOUT INTERNET CONNECTIONS!

What?!

>Can you imagine?

No! I was just in Malaysia, and now I'm at an internet cafe
in a hotel in Singapore where there's a big Asian studies
conference going on - including one talk on the role of internet
cafes in Malaysia. So, I'm curious where you could have been.
The northeastern United States, perhaps? I hear that backward
region has intermittent internet access these days.

>It took me a while to get back -- hence my email silence.

I hope your return didn't set up a timelike loop!

>It was actually a rather fortunate mishap, though,
>because the wormhole got me thinking about topological things that
>can happen with 2-complexes, which will be part of the content of
>this post!

Good. Luckily, despite their name, 2-complexes are not too complex -
at least for what we're doing. There are some famous unsolved
homotopy theory problems concerning 2-complexes, like the Andrews-Curtis
conjecture. As explained in "week23", these may have some relationship
to topological quantum field theory... but we don't need to worry about
them here, because electromagnetism only needs HOMOLOGY THEORY, which
is a lot easier than homotopy theory.

Anyway:

Good! But the "cut" is really just the edge e1, not an
actual "cut" - so the topology of this 2-complex is just
that of an annulus.

And that's good, because an annulus has a hole in it, and
holes tend to cause (co)homology groups to be nontrivial.

>Note there are 2 vertices,
>3 edges, and only one plaquette (p) -- there is no plaquette
>whose boundary is just e2. Now to see that this admits a
>closed but not exact connection, write A(ei)=Ai and then the
>condition dA=0 is just A3-A1-A2 = 0. This clearly has lots of
>solutions where A2 is not zero. But if A were exact then we
>would have A2=phi(t(e2))-phi(s(e2))=phi(v2)-phi(v2)=0, so A
>isn't exact.

Right! In physics lingo, these "closed but not exact"
connections correspond to the situation where you have a
vector potential flowing around a loop even though it has
zero "curl" over every plaquette.

Here the interesting loop is the edge e2, and you can have
A2 being nonzero even though the "curl" associated to the
plaquette p, namely dA(p) = A3-A1-A2, is zero.

>So, to do path integrals with the assumption that H^1 = 0, it seems
>we need to consider only 2-complexes that don't have "holes" of
>this kind.

Right!

>I.e. as long as we use 2-complexes without any "missing
>plaquettes," we are safe to use the space "connections mod gauge
>transformations" in doing path integrals instead of "connections mod
>closed connections" (which doesn't have as nice a ring to it).

Right! The concept of "missing plaquette" is a bit hard
to make precise, but you can always get H^1 = 0 by taking
your 2-complex and putting in a bunch of new plaquettes.

In this example, if we put a plaquette in the obvious hole
in your picture, the "curl" of A over this new plaquette
would be just A2... eliminating the problem we discussed
above.
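
By the way, if anyone wants to see these cohomology computations
done completely concretely, here's a little pure-Python calculation
of dim H^1 for Derek's example. The rank routine is just garden-variety
Gaussian elimination; only the two incidence matrices encode the example,
and "d1_filled" is the hypothetical extra plaquette with curl A2:

```python
from fractions import Fraction

def rank(rows):
    # rank of an integer matrix, via Gaussian elimination over the rationals
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f*b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Derek's 2-complex: 2 vertices, 3 edges (e1: v2->v1, e2: v1->v2,
# e3: v2->v1), and one plaquette p with dA(p) = A3 - A1 - A2.
d0 = [[ 1, -1],      # (df)(e1) = f(v1) - f(v2)
      [-1,  1],      # (df)(e2) = f(v2) - f(v1)
      [ 1, -1]]      # (df)(e3) = f(v1) - f(v2)
d1 = [[-1, -1, 1]]   # dA(p) = A3 - A1 - A2

dim_H1 = (3 - rank(d1)) - rank(d0)   # closed 1-forms minus exact 1-forms
print(dim_H1)                        # 1: one "hole", so closed-but-not-exact A's exist

# Fill the hole with a new plaquette whose "curl" is just A2:
d1_filled = [[-1, -1, 1],
             [ 0,  1, 0]]
print((3 - rank(d1_filled)) - rank(d0))   # 0: now every closed 1-form is exact
```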

Gotta run! More later.


John Baez

unread,
Aug 22, 2003, 10:57:17 PM8/22/03
to
In article <3fa8470f.03081...@posting.google.com>,
Eric A. Forgy <fo...@uiuc.edu> wrote:

>dere...@yahoo.com (Derek Wise) wrote:

>> Here's the magic formula [....]

>Please explain! That is too cool to let slip by :)

Yes! This formula is *very* cool, and *very* important
in quantum field theory! All of Feynman diagram theory starts here.

It's also easy to understand!

So, you shouldn't let it slip by!

Let me rewrite it in a less jargonesque form, so everyone
can admire it. Suppose we have an n x n symmetric real matrix M.
Then we can define a function on R^n as follows:

exp(-x.Mx)

where the Mx is what you get by multiplying the matrix M
and the column vector x, and the dot means "dot product".
When M is "positive definite", meaning that

x.Mx > 0

whenever the vector x is nonzero, the function

exp(-x.Mx)

is shaped like a Gaussian bump centered at the origin.

Now, we want to know its integral over all of R^n!
And the answer is:

int_{R^n} exp(-x.Mx) dx = sqrt(pi^n/det(M))

We'll see why in a minute.

Derek rewrote this formula in terms of variables
relevant to our lattice gauge theory problem. The vector x
becomes the electromagnetic vector potential A, which
assigns a number A(e) to each edge of our lattice (or "2-complex").
The quantity x.Mx becomes the action S(A) of the electromagnetic field.
Finally, the number n is the number of edges in our "2-complex",
so we say R^E instead of R^n. With all these changes in
notation, Derek got:

>> int_{R^E} exp(-S(A)) dA = sqrt(pi^E/det(M)).

Now, let's see why it's true.

>I can see it is something like (probably exactly like) integrating a
>Gaussian:
>
>I = int_R exp(-x^2) dx
>
>I^2
>= int_R^2 exp(-x^2-y^2) dx dy
>= 2*pi*int_0^inf exp(-r^2) r dr
>= pi*int_0^inf exp(-u) du
>= pi

Yes, it all boils down to this! Some famous mathematician
said that a good mathematician is someone for whom this integral
is obvious. It was an incredibly stupid remark, but ever since
reading it I've made sure to remember how to do this integral. :-)

I also use this integral as a method to convince students that
double integrals in polar coordinates are cool! For some reason
kids learning calculus are scared of that particular topic: whenever
I first say "polar coordinates" there's a collective groan.
I respond by saying that Cartesian coordinates are for squares.

>Therefore
>
>I = sqrt(pi)
>
>Uh! Uh! I can almost see it! :)

Yes, once you know that

int_R exp(-x^2) dx = sqrt(pi),

it's easy to see the multivariable version

int_{R^n} exp(-x.Mx) dx = sqrt(pi^n/det(M))

>I suck in linear algebra, but I'm guessing that if M is symmetric it can
>probably be decomposed into the product of a matrix and its transpose
>like
>
>M = T^t T

This is true if M is symmetric *and* positive definite.
It's sorta obvious that any M of this form is symmetric.
It's also positive definite if det(T) is nonzero,
since then we have

x.Mx = Tx.Tx > 0

But, it's also true that if M is symmetric and positive definite,
we can find a T with det(T) nonzero and M = T^t T.
I won't prove this here, but it's easy if you know how to
diagonalize symmetric matrices.
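
(If you want an explicit T, a Cholesky factorization does the job.
Here's a quick Python sketch for the 2x2 case; the particular M below
is just an example I cooked up:

```python
import math

def cholesky_T(M):
    # upper-triangular T with T^t T = M, for a 2x2 symmetric
    # positive definite M
    t11 = math.sqrt(M[0][0])
    t12 = M[0][1] / t11
    t22 = math.sqrt(M[1][1] - t12*t12)
    return [[t11, t12],
            [0.0, t22]]

M = [[5.0, -3.0],
     [-3.0, 9.0]]
T = cholesky_T(M)

det_T = T[0][0]*T[1][1]   # T is triangular, so just the diagonal product
print(det_T**2)           # equals det(M) = 36, so det(M) = det(T)^2
```

Note how positive definiteness is exactly what keeps the square roots real.)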

So anyway, let's assume M is symmetric and positive definite.
Then you can do this change of variables:

>Then if we define
>
>A' = T(A)
>
>then
>
>S(A) = A^t M A = A^t T^t T A = A'^t A'
>
>and
>
>exp(-S(A)) = exp(-A1'^2-A2'^2-...-AE'^2)
>
>which is just a product of E Gaussians.

Right!

And the integral of *that* is a product of E copies of sqrt(pi).
But, we have to worry about the Jacobian that shows up
when we did the change of coordinates:

>I
>= int_{R^E} exp(-S(A)) dA
>= int_{R^E} exp(-\sum_i Ai'^2) 1/det(T) dA'
>= 1/det(T) prod_i [int_R exp(-Ai'^2) dAi']
>= 1/det(T) prod_i sqrt(pi)
>= 1/det(T) * sqrt(pi^E)
>
>However, since M = T^t T we have
>
>det(M) = det(T)^2
>
>so that
>
>I = sqrt(pi^E/det(M))

Voila! We're done! And I hope I've eased your worries
about dividing by det(M): it'll be a positive number (not zero)
whenever M is positive definite.

(Derek went on to do an integral in a case where M was
*not* positive definite and det(M) was zero, but it blew
up in his face and he quickly learned his lesson.)

So, brand this formula on your forehead right up there with d^2 = 0:

int_{R^n} exp(-x.Mx) dx = sqrt(pi^n/det(M))
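
And if branding isn't enough, you can check it numerically. Here's a
crude midpoint-rule check in Python for a 2x2 example M I just made up;
nothing here refers to our lattice:

```python
import math

# Check  int_{R^n} exp(-x.Mx) dx = sqrt(pi^n / det M)  for n = 2,
# using a midpoint rule on a big box (the Gaussian tails are negligible).
M = [[2.0, 0.5],
     [0.5, 1.0]]            # symmetric, positive definite
det_M = M[0][0]*M[1][1] - M[0][1]*M[1][0]

def integrand(x, y):
    # exp(-x.Mx) for the vector (x, y)
    q = M[0][0]*x*x + 2*M[0][1]*x*y + M[1][1]*y*y
    return math.exp(-q)

L, N = 6.0, 400             # integrate over [-L, L]^2 with an N x N grid
h = 2*L/N
numeric = sum(integrand(-L + (i + 0.5)*h, -L + (j + 0.5)*h)
              for i in range(N) for j in range(N)) * h*h

exact = math.sqrt(math.pi**2 / det_M)
print(numeric, exact)       # the two agree to several decimal places
```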

John Baez

unread,
Aug 23, 2003, 12:16:12 AM8/23/03
to
In article <3fa8470f.03081...@posting.google.com>,
Eric A. Forgy <fo...@uiuc.edu> wrote:

>> So, the first integral we want to do is the partition function:
>>
>> int exp(-S(A)) DA.

>I have another naive question (no, REALLY naive).

Yeah, yeah.

>In this post you showed how the 3x3 matrix was degenerate so you went
>to the space C^1/J, which ended up giving a 2x2 nondegenerate matrix.
>However, 2 just happens to correspond to the number of plaquettes. Why
>can't we just consider something like a modified partition function
>
>int exp(-S(F)) DF
>
>?? If I'm not mistaken, S already IS a nondegenerate quadratic form in
>F without having to do anything fancy. Why can't we just integrate
>over configurations of F rather than A, where we need to cook up fancy
>shmancy equivalence classes?

You're almost right: in electromagnetism uncoupled to matter,
with gauge group R, we *can* do the path integral as an
integral over the space of electromagnetic fields (F's) instead
of over the space of vector potentials (A's). It's actually very
useful, and we should talk about it!

But, we need to be a bit careful about doing this if we want to get
the right answer. And, this way of doing things is not so good
if we couple electromagnetism to matter, since then the action
is easier to express in terms of A, rather than F. It's also a
little less than optimal if we want to use the gauge group U(1)
instead of R (these actually give slightly different theories
of electromagnetism in topologically funky situations). But
let's not worry about this fancy stuff yet.

In a previous post I asserted that in gauge theory we want
to do path integrals over the space of connections mod gauge
transformations.

For electromagnetism with gauge group R, this means doing
integrals over the space:

{1-forms A}/{1-forms A = df}

that is, 1-forms modulo exact 1-forms.

However, I also explained that these integrals will diverge
unless every closed 1-form is exact! Let me remind you why.

By definition, a closed 1-form A is one for which the electromagnetic
field F = dA is zero. This means the action S(A) is zero.
So, having closed but not exact 1-forms thus implies that the
action S(A) is not positive definite, even when you think of
it as a function on the space

{1-forms A}/{1-forms A = df}

And we've seen that this magic formula

int_{R^n} exp(-x.Mx) dx = sqrt(pi^n/det(M))

blows up in our face when the matrix M isn't positive definite.

One way to avoid this problem is to assume every closed 1-form
is exact. This is an assumption on the topology of our
lattice (or more precisely, our "2-complex"). In technojargon,
we're assuming its FIRST COHOMOLOGY WITH REAL COEFFICIENTS
vanishes. For the lowbrows among us, this means it doesn't
have holes in it formed by missing plaquettes. Derek gave
an example of a hole like this a while back.

(When the first cohomology with real coefficients doesn't vanish,
we get something called the "Bohm-Aharonov effect". I don't
want to talk about this, but it is fun to think about.)

Another way to avoid the problem is to do path integrals
on this smaller space:

{1-forms A}/{1-forms A with dA = 0}

that is, 1-forms modulo closed 1-forms. Observables
that are functions on this smaller space will typically
have well-defined path integrals (= "vacuum expectation values")

So far this is all just review! Now let me tackle the issue
you raised.

Notice that these two vector spaces are isomorphic:

{1-forms A}/{1-forms A with dA = 0} = {2-forms F with F = dA}

or in other words,

{1-forms}/{closed 1-forms} = {exact 2-forms}

(For some reason the "isomorphism" key on my keyboard is
stuck, so I'm having to use the equals sign.)

If you don't see why these vector spaces are isomorphic,
please think about it until you do!

What this means is that we CAN rewrite our path integral
as an integral over electromagnetic fields F - but only
over those that are of the form F = dA.

Now, things get simpler if we assume every closed 2-form
is exact, since then

{2-forms F with F = dA} = {2-forms F with dF = 0}

and when we're working on a 2-complex, *all* 2-forms F
have dF = 0, since there ain't no 3-forms!!!

This lets us rewrite our path integral as an integral over
the space of all 2-forms, which indeed makes it simpler to do,
as you've already pointed out in Derek's example.

But, assuming that every closed 2-form is exact is an
assumption about the topology of our 2-complex: namely,
that its SECOND COHOMOLOGY WITH REAL COEFFICIENTS vanishes.

To give you some sense of what this means, I can say that
this assumption rules out certain sorts of "2-dimensional holes".
For example, it's true if our 2-complex is shaped like a disk:

      ----------v1---------
     /          |          \
    /           |           \
   /            |            \
  |             |             |
  |             |             |
  ^ e1          V e2          ^ e3
  |             |             |
  |    ===>     |    <===     |
   \    p1      |     p2     /
    \           |           /
     \          |          /
      ---------v2---------

or an annulus, or a Moebius strip, or a 37-holed torus with a
disk cut out of it, but not if it's shaped like a sphere, or
a torus, or a 37-holed torus... or the usual sort of 2-complex
you get when doing lattice gauge theory on a cubical lattice!

(A suspicious reader might think this whole thread is secretly
just a trick to teach physicists about cohomology theory. In
fact, that's only ONE of my fiendish goals.)

Anyway, we may enjoy doing some path integrals with or without
this extra assumption. Since this assumption rules out cases of
real physical interest, we don't want to commit ourselves to making
it. But, it's lots of fun to see what happens when we can do
path integrals over the space of electromagnetic fields F instead
of vector potentials A mod those with dA = 0. Things get simpler!
So, keep around some examples of 2-complexes with vanishing
second cohomology with real coefficients, like this one:

      ----------v1---------
     /          |          \
    /           |           \
   /            |            \
  |             |             |
  |             |             |
  ^ e1          V e2          ^ e3
  |             |             |
  |    ===>     |    <===     |
   \    p1      |     p2     /
    \           |           /
     \          |          /
      ---------v2---------

We will also want these if we ever get around to studying
nonabelian Yang-Mills theory on a 2-complex, since they're
a good way to understand "Lie-algebra-valued white noise".

[I'll cc this to Derek in case he fell into a wormhole
that doesn't admit usenet newsgroups.]


Derek Wise

unread,
Aug 26, 2003, 6:04:40 PM8/26/03
to sci-physic...@moderators.isc.org

Gauge-equivalent gauge transformations
--------------------------------------

Today I want to expand on an idea I began earlier in this thread with
the following parenthetical comment:

> (Incidentally, John, isn't the fact that a scalar
> potential is only determined up to a constant really a primitive
> example of a "modification" -- a "gauge transformation between gauge
> transformations"? Oooooo -- more n-categories. Wheee!)

Now that we have formalized our spacetime model as a chain complex,
let's see what can be done toward making the above idea precise. As a
reminder, here's what our chain complex and cochain complex look like,
showing where each entity in the construction lives:


      @_0          @_1          @_2
0 <-------- C_0 <-------- C_1 <-------- C_2

        "Vertices"    "Edges"   "Plaquettes"

       d_0           d_1          d_2
C^0 --------> C^1 --------> C^2 -------> 0

 "Gauge     "Connections"  "Curvatures"
  Trans."


My proposal in this post is that this diagram should actually be
extended. This may or may not be surprising to people who know some
algebraic topology -- I don't know. Since I haven't taken the
topology sequence at UCR yet, though, the discovery I'm about to tell
you of was enlightening to me.

The idea is as follows. The existence of gauge transformations means
d_1 can't be one-to-one. That is, gauge equivalent connections give
rise to the same curvature, as we all know. Now go down a dimension.
d_0 is not one-to-one either, because potentials are only determined
up to an additive constant. This leads to my remarks in that earlier
post. If we view gauge transformations as *being* "0-forms" (as the
above diagram suggests) rather than as "the process of adding d of a
0-form to the connection", then we seem to have a notion of "gauge
transformations between gauge transformations." That is, we can view
two scalar potentials that differ by a constant as being somehow
"gauge equivalent."

I'm going to suggest calling these gauge transformations between gauge
transformations "Modifications." (I have reasons for doing this,
borrowed from category theory -- I don't want to go into that
connection just yet.)

I wanted to view "gauge transformations modulo modifications" in the
same way as "connections modulo gauge transformations," only a step
down the chain. Here's what I came up with:

Replace the "0" at the left end of the chain with something we'll call
"C_{-1}", consisting of formal linear combinations of connected
components of the lattice. Gauge transformations, after all, are
really only defined up to a *local* constant, so modifications (the
dual space C^{-1}) need one degree of freedom for each connected
component of the lattice. Following our noses, we extend the complex
and its dual as shown below.


     @_{-1}       @_0          @_1          @_2
0 <------ C_{-1} <-------- C_0 <-------- C_1 <-------- C_2

      "Connected       "Vertices"    "Edges"   "Plaquettes"
      Components"


        d_{-1}        d_0           d_1          d_2
C^{-1} --------> C^0 --------> C^1 --------> C^2 -------> 0

"Modifications"   "Gauge     "Connections"  "Curvatures"
                   Trans."

With this picture, everything works out beautifully. The surprising
thing (at least to me) is that the natural notion of "the boundary of
a vertex" seems to be the connected component in which that vertex
lies. Let's see that this setup really does give us results we
expect:

Let m be a modification (m in C^{-1}) and v a vertex. Then:

(d_{-1} m)(v) := m(@_0 v) = m(connected component containing v)

So (d_{-1} m) really does assign the same value to each vertex in a
given connected component! We might call the action of a modification
on a gauge transformation "picking a (local) ground potential."

Note also that "the boundary of a boundary is zero" still holds: in
this case, the equation @^2=0 just says "the source and target
vertices of any edge both lie in the same connected component."
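
Here's a toy version of this in Python, just to see the formulas in
action (the two-component lattice, the component labels, and the
numbers are all made up for illustration):

```python
# Sketch of the extended complex on a graph with two connected components.
# Vertices v0, v1 form component c0; vertex v2 alone is component c1.
component = {0: 0, 1: 0, 2: 1}
edges = [(0, 1)]                 # one edge inside c0, as (source, target)

def d_minus1(m):
    # (d_{-1} m)(v) = m(connected component containing v)
    return {v: m[component[v]] for v in component}

def d_0(f):
    # (d_0 f)(e) = f(target) - f(source)
    return [f[t] - f[s] for (s, t) in edges]

m = {0: 5.0, 1: -2.0}            # a "modification": one number per component
f = d_minus1(m)
print(f)                         # constant on each connected component
print(d_0(f))                    # [0.0]: d of a constant is zero, i.e. dd = 0
```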

Now we have really extended the diagram as far as possible to the
left. d_{-1} is one-to-one, so there is nothing left to mod out by.
There is no "gauge equivalence between modifications." To have those,
we'd have to bump the whole theory up a dimension and talk about
lattice 2-form electromagnetism (which we just might do eventually).

DeReK

PS: To send me spam, just use the address in the header.
Otherwise, send me mail by using my first name at the domain
math.ucr.edu.

Derek Wise

unread,
Aug 28, 2003, 2:05:24 PM8/28/03
to

fo...@uiuc.edu (Eric A. Forgy) wrote:

> [Note: "Tab" doesn't work too well for ascii art :)]

Yeah, I noticed this after I hit send. I made the mistake of using a
programming editor that automatically puts tabs in. Thanks for posting
the corrected picture:

> >       ----------v1---------
> >      /          |          \
> >     /           |           \
> >    /            |            \
> >   |             |             |
> >   |             |             |
> >   ^ e1          V e2          ^ e3
> >   |             |             |
> >   |    ===>     |    <===     |
> >    \    p1      |     p2     /
> >     \           |           /
> >      \          |          /
> >       ---------v2---------

> > So, the first integral we want to do is the partition function:
> >
> > int exp(-S(A)) DA.

> I have another naive question (no, REALLY naive).
>
> In this post you showed how the 3x3 matrix was degenerate so you went
> to the space C^1/J, which ended up giving a 2x2 nondegenerate matrix.
> However, 2 just happens to correspond to the number of plaquettes. Why
> can't we just consider something like a modified partition function
>
> int exp(-S(F)) DF
>
> ?? If I'm not mistaken, S already IS a nondegenerate quadratic form in
> F without having to do anything fancy. Why can't we just integrate
> over configurations of F rather than A, where we need to cook up fancy
> shmancy equivalence classes?

This is an interesting question. You are right: the action really
is nondegenerate as a function of F. Let me take a stab at explaining
why your idea doesn't quite work.

Naively, if we tried integrating over configurations of F (i.e. over
the space C^2) we might just let S=Sum(F^2) as before, and then we'd
get (using the 2-complex in the diagram above)

int exp(-S(F)) dF = int exp(-F1^2 - F2^2) dF1 dF2
                  = int exp( -[F1 F2][ 1 0 ][F1] ) dF1 dF2
                                     [ 0 1 ][F2]
                  = sqrt(pi^2/det(id))
                  = pi.

Before we got pi/6, so we clearly don't get the same answer. But that's
not the worst of it. The big problem is that we get the same answer
for *any* 2-complex with only 2 plaquettes. More generally, for any 2-
complex with n plaquettes we get pi^(n/2). By integrating over the
space of possible curvatures instead of the space of all connections,
we eliminate all of the information about the geometry. Integrating
over C^1/J takes account of the geometric data -- in particular, the
process cares whether two plaquettes share an edge.

Of course, you might suggest that the action be modified in some
way. It would have to be modified in a way that took the geometry
into account, though, and this would probably amount to a pretty
complicated set of rules. In the end, it would just be a complicated
way of thinking about integrating over C^1/J.
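
In code, the comparison looks like this (the matrix M for C^1/J comes
from my earlier basis {(1,1,0), (1,-1,-2)}, with F1 = -A1-A2 = -2x1 and
F2 = A2+A3 = x1-3x2):

```python
import math

# Over curvature space C^2, the action is F1^2 + F2^2, so M = identity:
Z_F = math.sqrt(math.pi**2 / 1.0)        # = pi, for ANY 2-plaquette complex

# Over C^1/J, S = 4*x1^2 + (x1 - 3*x2)^2 = x.Mx with:
M = [[5.0, -3.0],
     [-3.0, 9.0]]
det_M = M[0][0]*M[1][1] - M[0][1]*M[1][0]    # = 36
Z_A = math.sqrt(math.pi**2 / det_M)          # = pi/6

print(Z_F, Z_A)    # the F-space integral has forgotten the geometry
```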

DeReK

Gerard Westendorp

unread,
Aug 28, 2003, 2:44:14 PM8/28/03
to sci-physic...@moderators.isc.org

Eric A. Forgy wrote:

[..]

> I have another naive question (no, REALLY naive).


>
> In this post you showed how the 3x3 matrix was degenerate so you went
> to the space C^1/J, which ended up giving a 2x2 nondegenerate matrix.
> However, 2 just happens to correspond to the number of plaquettes. Why
> can't we just consider something like a modified partition function
>
> int exp(-S(F)) DF
>
> ?? If I'm not mistaken, S already IS a nondegenerate quadratic form in
> F without having to do anything fancy. Why can't we just integrate
> over configurations of F rather than A, where we need to cook up fancy
> shmancy equivalence classes?

I was thinking the same.
You can actually obtain int exp(-S(F)) DF by choosing a suitable basis
in Derek's method (If I understand it correctly).

Derek chose:

> C^1/J = span { (1,1,0),(1,-1,-2) }
> = { (x1+x2 , x1-x2 , -2x2) | x1,x2 in R } =~ R^2


But if you choose for example:

C^1/J = span { (-1,0,0),(0,0,1) }
      = { (-x1 , 0 , x2) | x1,x2 in R } =~ R^2

You get:

S(A+J) = Sum_{plaquettes p} F(A+J)(p)^2
       = (-A1-A2)^2 + (A2+A3)^2

       = (x1 - 0)^2 + (0 + x2)^2
       = (x1)^2 + (x2)^2

       = (x1 x2)[ 1 0 ] [x1]
                [ 0 1 ] [x2]

x1 and x2 are now equal to F1 and F2, like we wanted.

The determinant is now 1.
Apparently, if I am not mistaken, the value of the path integral
depends on the choice of basis. Hopefully this choice-dependency
will cancel out when we calculate observables...
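
Here is the basis-dependence as a few lines of Python (the curvature
map encodes F1 = -A1-A2 and F2 = A2+A3 from above; the second basis
below is just one convenient choice making x_i = F_i):

```python
import math

# Curvature map D: C^1 -> C^2; rows give F1 = -A1-A2 and F2 = A2+A3.
D = [[-1, -1, 0],
     [ 0,  1, 1]]

def Z(b1, b2):
    # Restrict S to span{b1, b2}: S(x) = |D(x1*b1 + x2*b2)|^2 = x.Mx,
    # then Z = sqrt(pi^2 / det M).
    DB = [[sum(D[i][k]*b[k] for k in range(3)) for b in (b1, b2)]
          for i in range(2)]
    M = [[sum(DB[k][i]*DB[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return math.sqrt(math.pi**2 / (M[0][0]*M[1][1] - M[0][1]*M[1][0]))

print(Z((1, 1, 0), (1, -1, -2)))   # Derek's basis: pi/6
print(Z((-1, 0, 0), (0, 0, 1)))    # a basis with x_i = F_i: pi
```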

Gerard

John Baez

unread,
Aug 30, 2003, 8:59:17 PM8/30/03
to

In article <c2e84040.03081...@posting.google.com>,
Derek Wise <dere...@yahoo.com> wrote:

>Derek Wise had previously written:

>> (Incidentally, John, isn't the fact that a scalar
>> potential is only determined up to a constant really a primitive
>> example of a "modification" -- a "gauge transformation between gauge
>> transformations"? Oooooo -- more n-categories. Wheee!)

Yes! N-CATEGORIES rear their pretty head yet again!

>Now that we have formalized our spacetime model as a chain
>complex, let's see what can be done toward making the above

>idea precise. As a reminder, here's what our chain complex

>and cochain complex look like, showing where each entity in
>the construction lives:
>
>
>       @_0          @_1          @_2
>0 <-------- C_0 <-------- C_1 <-------- C_2
>
>         "Vertices"    "Edges"   "Plaquettes"
>
>
>        d_0           d_1          d_2
> C^0 --------> C^1 --------> C^2 -------> 0
>
>  "Gauge     "Connections"  "Curvatures"
>   Trans."

Nice. And here's what this has to do with n-categories:

A chain complex is secretly just a specially simple kind
of n-category. So, whenever people do homology theory,
they are secretly doing a watered-down sort of n-category theory.

There are different ways to understand this, but here's
the way that may be easiest for you. Think about starting
with a discrete model of spacetime and gradually making
it more and more algebraic, as follows:

1) We start out by assuming spacetime was a 2-DIMENSIONAL
PIECEWISE-LINEAR CELL COMPLEX, or 2-COMPLEX for short.
This is a gadget built from 0-dimensional vertices,
1-dimensional edges and 2-dimensional polygon-shaped
"plaquettes".

2) It's not hard to take such a thing and turn it into a
2-CATEGORY in which the vertices, edges and plaquettes
give objects, morphisms and 2-morphisms. This 2-category
also has extra morphisms and 2-morphisms that are formal
"composites" of edges and plaquettes.

3) We can then throw in formal "inverses" for every morphism
and 2-morphism in this 2-category, and get a 2-GROUPOID.

I'd rather not give the details of steps 2) and 3),
since you can probably imagine them, and some of the details
are in Pfeiffer's paper:

Higher gauge theory and a non-Abelian generalization of
2-form electrodynamics
http://www.arXiv.org/abs/hep-th/0304074

though he gets the 2-groupoid in one fell swoop rather
than the two-stage process I describe. Anyway, to study
electromagnetism, we take this process even further:

4) Starting from a 2-groupoid we can form a new one
by taking formal linear combinations of objects, morphisms
and 2-morphisms, and extending all the 2-groupoid operations
to be linear maps. This result is called a LINEAR 2-GROUPOID.

But, there's a theorem saying that a linear 2-groupoid is
secretly the same as a 3-TERM CHAIN COMPLEX:

       @_1          @_2

C_0 <-------- C_1 <-------- C_2


Explaining this carefully would take a while, but you should
already be predisposed to believe it.

After all, we *already did* take our 2d complex, form linear
combinations of vertices, edges and plaquettes, and make
the resulting vector spaces into a 3-term chain complex.
What I'm doing now is to take this construction and break it
down into lots of steps, so you can see the geometry slowly
morphing into algebra before your very eyes, like a caterpillar
metamorphosing into a butterfly and then taking wing:

2d complex (geometry)
2-category (n-category theory)
2-groupoid (n-groupoid theory = homotopy theory)
3-term chain complex (homology theory)

When we do nonabelian gauge theory we will not go the whole
route - the last step, to homology theory, makes everything
"abelian".

There are some nuances I'm deliberately glossing over here
in order to convey the big picture. But as George Washington
said when his dad asked if he had chopped down the cherry tree,
"I cannot tell a lie". So, I feel the need to point out two
of these nuances.

(Washington was probably lying when he said that... unless of
course people are lying when they tell this story! In fact,
I think it's a completely fabricated story designed to promote
truthfulness. But never mind.)

1) What we did earlier in this thread was to take our
2d complex and form linear combinations of vertices, edges
and plaquettes. What I'm suggesting now is to first take
formal "composites" of edges and plaquettes to make a
2-category, then throw in formal "inverses" of these,
and finally take linear combinations of everything.
This gives us a *bigger* chain complex than the one we
had before! But it's "equivalent" to that other one,
so the difference is no big deal...

...once you understand what "equivalences" of chain complexes
are, how you work with them, and why these two are equivalent.

2) We have to equip an edge

        e
x---------------y

with an orientation to think of it as a morphism e: x -> y
or e: y -> x. Similarly, we need to equip a plaquette with some
stuff to think of it as a 2-morphism from something to something.
It's probably best to do this "equipping" in *all possible ways*
when creating a 2-category from a 2d complex. If we do that,
each edge gives two different morphisms - which become inverses
when we make our 2-category into a 2-groupoid. Similarly,
each plaquette gives a bunch of 2-morphisms, pairs of which become
inverses when we make our 2-category into a 2-groupoid.

Now, here's a little puzzle. Given our chain complex

       @_1          @_2

C_0 <-------- C_1 <-------- C_2

"Vertices"    "Edges"    "Plaquettes"

the first thing we did in electromagnetism was to form the
dual cochain complex:

d_0 d_1

C^0 --------> C^1 --------> C^2

"Gauge "Connections" "Curvatures"
Transformations"

My question is, what is this construction in the
language of 2-categories? We start with a linear
2-groupoid and we wind up with a new one which is
in some sense "upside-down". What are we doing
here?
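At the level of plain linear algebra, before any 2-category talk, this dualization is just transposition: pick bases for the C_k's, and the coboundary d_k is the transpose of the boundary @_{k+1}. Here is a minimal sketch of that one fact (the toy graph and all the names are mine):

```python
# Toy check (assumed setup): a graph with one edge e: x -> y.
# Rows of D1 index vertices, columns index edges, so @_1(e) = y - x.
D1 = [[-1],   # coefficient of x in @_1(e)
      [ 1]]   # coefficient of y in @_1(e)

def transpose(M):
    return [list(row) for row in zip(*M)]

# Dualizing the chain complex transposes the boundary matrices:
# the coboundary on 0-cochains is (d_0 phi)(e) = phi(y) - phi(x).
d0 = transpose(D1)

phi = [3.0, 5.0]   # a 0-cochain, i.e. a "gauge transformation"
dphi = [sum(d0[i][j] * phi[j] for j in range(len(phi)))
        for i in range(len(d0))]
print(dphi)        # [2.0], which is phi(y) - phi(x)
```

The same transposition one level up turns the plaquette boundary @_2 into the curvature map d_1 - which is at least a concrete starting point for the 2-categorical question.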

It may help if you reread your notes on nonabelian
lattice gauge theory, where we had talked about a
categorical interpretation of connections and gauge
transformations. Don't be afraid to give a slightly
fuzzy answer followed by a few caveats - there are a
number of delicate points here that you may not have
the technical chops to get exactly right, but that's okay.

Anyway, I still haven't gotten to your actual QUESTION.
Since almost everyone is asleep by now, I'll give my reply
to that in a separate post.

John Baez

Aug 30, 2003, 11:27:16 PM8/30/03
to
Okay... having explained in a sketchy way how chain
complexes are secretly n-categories, I'll now address
Derek's proposal in a way that doesn't actually use much
about n-categories, just chain complexes. So if you fell
asleep for the last part, you can wake up now.

>Gauge-equivalent gauge transformations
>--------------------------------------

>Now that we have formalized our spacetime model as a chain complex,
>let's see what can be done toward making the above idea precise. As a
>reminder, here's what our chain complex and cochain complex look like,
>showing where each entity in the construction lives:


@_0 @_1 @_2
0 <-------- C_0 <-------- C_1 <-------- C_2

"Vertices" "Edges" "Plaquettes"

d_0 d_1 d_2
C^0 --------> C^1 --------> C^2 -------> 0

"Gauge "Connections" "Curvatures"
Trans."

>My proposal in this post is that this diagram should actually be
>extended. This may or may not be surprising to people who know some
>algebraic topology -- I don't know. Since I haven't taken the
>topology sequence at UCR yet, though, the discovery I'm about to tell
>you of was enlightening to me.

Your idea is not very shocking, though I'm not sure I ever
thought about it quite this way. It's one of the many games
people play with (co)chain complexes. But it's infinitely
better to make up these games yourself, motivated by good
physical reasons, than to wait for some guy teaching you
homology theory to throw them at you.

>If we view gauge transformations as *being* "0-forms" (as the
>above diagram suggests) rather than as "the process of adding d of a
>0-form to the connection", then we seem to have a notion of "gauge
>transformations between gauge transformations." That is, we can view
>two scalar potentials that differ by a constant as being somehow
>"gauge equivalent."

Okay. In fact, if we ever get around to seriously studying
BRST quantization of nonabelian gauge theories (which seems
damn unlikely at the rate this thread is going), we'll find
that this notion of "gauge-equivalent gauge transformations"
manifests itself in the concept of "ghosts of ghosts".

Pretty spooky, eh? If you ever see a ghost turn pale with
fright and let out a blood-curdling scream, it's probably
because it's seen a ghost of ghosts.

Heh. A quick Google search turns up phrases ranging from:

these identities induce reducibility identities among
the gauge transformations and are responsible for the
presence of ghosts of ghosts

to

GHOSTS OF GHOSTS: A PAINFUL SECRET REVEALED

But as far as *physics* goes, the basic idea is really
just what you're talking about: "gauge-equivalent gauge
transformations".

>I wanted to view "gauge transformations modulo modifications" in the
>same way as "connections modulo gauge transformations," only a step
>down the chain. Here's what I came up with:
>
>Replace the "0" at the left end of the chain with something we'll call
>"C_{-1}", consisting of formal linear combinations of connected
>components of the lattice. Gauge transformations, after all, are
>really only defined up to a *local* constant, so modifications (the
>dual space C^{-1}) need one degree of freedom for each connected
>component of the lattice. Following our noses, we extend the complex
>and its dual as shown below.


@_{-1} @_0 @_1 @_2
0 <------ C_{-1} <-------- C_0 <-------- C_1 <-------- C_2

"Connected "Vertices" "Edges" "Plaquettes"
Components"


d_{-1} d_0 d_1 d_2
C^{-1} --------> C^0 --------> C^1 --------> C^2 -------> 0

"Modifications" "Gauge "Connections" "Curvatures"
Trans."


Nice! I think this is the math behind your construction:
given a chain complex

@_1 @_2
C_0 <-------- C_1 <-------- C_2 <------- ...

we can always create a slightly longer one

@_0 @_1 @_2
C_{-1} <-------- C_0 <-------- C_1 <-------- C_2 <-------- ....

by defining

C_{-1} = C_0/im(@_1) (called the "cokernel" of @_1)

and letting

@_0: C_0 -> C_0/im(@_1)

be the quotient map.
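In code, the only thing worth checking is a dimension count. Here is a hedged toy example (the graph is made up): dim C_{-1} = dim C_0 - rank(@_1), and that should come out to the number of connected components.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce over the rationals and count the pivots."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Toy lattice: vertices {0,1,2,3}, edges 0->1 and 2->3, so two
# connected components.  Columns of D1 are @_1(edge) = head - tail.
D1 = [[-1,  0],
      [ 1,  0],
      [ 0, -1],
      [ 0,  1]]

dim_quotient = len(D1) - rank(D1)   # dim of C_{-1} = C_0/im(@_1)
print(dim_quotient)                 # 2: one dimension per component
```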

>With this picture, everything works out beautifully.

Yes!

>The surprising
>thing (at least to me) is that the natural notion of "the boundary of
>a vertex" seems to be the connected component in which that vertex
>lies.

This is vaguely familiar to me for reasons you'd probably
rather not hear. I'll just point out this: algebraists
define a category of simplices using linearly ordered sets
and order-preserving maps between them. The n-element set
corresponds to (the vertices of) the (n-1)-simplex, but this
means the empty set corresponds to a "-1-simplex". Topologists
usually freak out at the concept of a -1-dimensional simplex,
but in algebraic topology -1-chains are fairly natural, so
they are ultimately forced to accept them - usually with the
help of the buzzword "augmented", as in "augmented chain
complex". You seem to be reinventing this idea!


Eric A. Forgy

Sep 16, 2003, 4:18:28 PM9/16/03
to sci-physic...@moderators.isc.org

ba...@galaxy.ucr.edu (John Baez) wrote
> Derek Wise <dere...@yahoo.com> wrote:

> Okay. In fact, if we ever get around to seriously studying
> BRST quantization of nonabelian gauge theories (which seems
> damn unlikely at the rate this thread is going), we'll find
> that this notion of "gauge-equivalent gauge transformations"
> manifests itself in the concept of "ghosts of ghosts".

*cough*

What happened?! Just as things were starting to get interesting this
thread took a nose dive. I'm hoping that with this post, we can get
things kick-started again. How about those path integrals?! Let's
compute something interesting! :)

Eric

Lubos Motl

Sep 19, 2003, 6:24:15 PM9/19/03
to
On Tue, 16 Sep 2003, Eric A. Forgy wrote:

> > manifests itself in the concept of "ghosts of ghosts".
> *cough*

Don't you like ghosts for ghosts? It is a kewl thing. Imagine a
generalization of the electromagnetic field that is a 4-form F(4), for
example. The Bianchi-identity part of the "Maxwell" equations,
dF(4)=0, implies that locally F(4) can be written as F(4)=d A(3) for some
3-form potential.

This A(3) potential admits a gauge symmetry that does not change F(4),
namely delta A(3) = d lambda(2), and there will be ghosts associated with
this 2-form lambda(2) of parameters. However it is not true that different
2-forms lambda(2) always lead to different delta A(3): if you change
lambda by

delta lambda(2) = d sigma(1)

for some 1-form sigma, delta A(3) won't change, because the difference is
proportional to d d sigma(1). Therefore not all the degrees of freedom
in the lambda transformations are independent, and you must take this
redundancy of redundancy into account and introduce ghosts for ghosts,
uniquely associated with the sigma(1) components.

My example would also have to contain the redundancy of redundancy of
redundancy - one must note that sigma(1)'s that differ by d tau(0) for a
0-form (scalar) tau lead to the same lambda(2) etc., and one must
introduce ghosts for ghosts for ghosts in this case.

While ghosts "eat" some physical degrees of freedom in the original
theory, ghosts for ghosts "eat" a part of the ability of ghosts to eat,
and so on. Therefore if the matter is thought to be "positive" (a positive
amount of physical degrees of freedom), ghosts are "negative", but ghosts
for ghosts are again positive and so on.
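Here is a toy version of that bookkeeping (my own sketch, not from the post): a p-form in d dimensions has (d choose p) components, and summing the tower lambda(p-1), sigma(1-style), ... with alternating signs telescopes, via Pascal's rule, to (d-1 choose p).

```python
from math import comb

def net_components(d, p):
    # C(d,p) - C(d,p-1) + C(d,p-2) - ... down to the 0-form:
    # matter positive, ghosts negative, ghosts of ghosts positive...
    return sum((-1) ** k * comb(d, p - k) for k in range(p + 1))

# The example above, a 3-form potential A(3), taken here in d = 10:
print(net_components(10, 3))   # 120 - 45 + 10 - 1 = 84
print(comb(10 - 1, 3))         # the telescoped closed form: also 84
```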

Best wishes
Lubos
______________________________________________________________________________
E-mail: lu...@matfyz.cz fax: +1-617/496-0110 Web: http://lumo.matfyz.cz/
phone: work: +1-617/496-8199 home: +1-617/868-4487
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Superstring/M-theory is the language in which God wrote the world.

Eric A. Forgy

Sep 23, 2003, 2:06:03 AM9/23/03
to sci-physic...@moderators.isc.org

Hi!

Lubos Motl <mo...@feynman.harvard.edu> wrote:
> On Tue, 16 Sep 2003, Eric A. Forgy wrote:
> > > manifests itself in the concept of "ghosts of ghosts".
> > *cough*
>
> Don't you like ghosts for ghosts? It is a kewl thing.

My *cough* was not meant as a criticism of ghosts of ghosts. I think
it is kind of quirky/neat :) Rather the *cough* was intended to try to
get the smart people to reinvigorate this cool thread I was so excited
about :) Anything anyone has to say about this subject is of interest
to me.

By the way, in the vein of "higher" Maxwell's equations. It is kind of
neat to think of "lower" Maxwell's equations, where you have a 0-form
"connection" A and 1-form "curvature" F = dA with equations of motion

1.) dF = 0
2.) del F = 0

This gives the scalar wave equation. So what would parallel transport
be like for a 0-form connection? Hopping from disconnected components,
i.e. Derek's (-1)-chains? :)
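Here's the 1d toy version in code (a finite chain of vertices, with the wave equation flattened to a second-difference equation and the sign of del glossed over): F = dA is the first difference of the 0-form A, and del F = 0 says the second difference vanishes, which a linear A solves.

```python
# A 0-form "connection" A on the vertices of a chain (assumed data).
A = [2.0 + 0.5 * i for i in range(6)]        # a linear potential

F = [A[i + 1] - A[i] for i in range(5)]      # 1-form curvature F = dA
delF = [F[i + 1] - F[i] for i in range(4)]   # del F at interior vertices

print(F)      # [0.5, 0.5, 0.5, 0.5, 0.5] -- constant curvature
print(delF)   # [0.0, 0.0, 0.0, 0.0] -- the equation of motion holds
```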

Best regards,
Eric

Lubos Motl

Sep 24, 2003, 2:43:38 PM9/24/03
to sci-physic...@ucsd.edu

On Tue, 23 Sep 2003, Eric A. Forgy wrote:

> 1.) dF = 0
> 2.) del F = 0

Note that the gauge invariance is

delta A = d lambda

where lambda is a (-1)-form. Minus one forms have zero components (because
(d choose -1) equals zero), so there is essentially no gauge invariance.

The Wilson one-dimensional loops are replaced by Wilson zero-dimensional
loops/points, whose value is the scalar field at the given point A itself.
In this sense, I believe that you are right that the parallel transport is
a discretized jump from one point to another.

Finally, exactly this 0-form potential is what is coupled to D-instantons
in string theory. D-instantons are D(-1)-branes, a lower-by-one
generalization of D-particles (they are points even in spacetime), and the
scalar field that couples to it (in type IIB string theory) is called the
axion. The p-dimensional potentials generalize to arbitrary values of "p"
because D-branes of all dimensions (between -1 and 25, if I include
bosonic string theory) exist in string theory.

In the type IIB string theory the (Ramond-Ramond scalar called the) axion
A (the 0-form that roughly satisfies the massless Klein-Gordon equation as
you deduced correctly) is combined with the dilaton, another scalar field,
into a complex scalar field "tau". There is a SL(2,Z) symmetry called
S-duality acting on tau via

tau -> (a tau + b) / (c tau + d)

where ((a,b),(c,d)) is a 2x2 matrix such that ad-bc=1 - i.e. an element of
the modular group.
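Here is a toy check of the group law (generic modular-group arithmetic, nothing string-specific): acting with two SL(2,Z) matrices in succession is the same as acting once with their matrix product.

```python
def act(m, tau):
    """The fractional linear action tau -> (a tau + b)/(c tau + d)."""
    (a, b), (c, d) = m
    return (a * tau + b) / (c * tau + d)

def matmul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

S = ((0, -1), (1, 0))     # tau -> -1/tau
T = ((1, 1), (0, 1))      # tau -> tau + 1

tau = 0.3 + 1.7j          # any point in the upper half-plane
lhs = act(S, act(T, tau))
rhs = act(matmul(S, T), tau)
print(abs(lhs - rhs) < 1e-12)   # True: the two agree
```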

The Ramond-Ramond fields are always Abelian, and therefore their
generalization of the counting of forms is pretty easy. A much more
difficult question is to construct a non-Abelian generalization of the
p-form gauge fields.

Best wishes
Lubos

Of Udaj and Kusaj we have only fragmentary data: what remains are pieces of Udaj and the limbs of Kusaj.


Igor Khavkine

Sep 25, 2003, 10:28:18 PM9/25/03
to
fo...@uiuc.edu (Eric A. Forgy) wrote in message
news:<3fa8470f.03092...@posting.google.com>...

> My *cough* was not meant as a criticism of ghosts of ghosts. I think
> it is kind of quirky/neat :) Rather the *cough* was intended to try to
> get the smart people to reinvigorate this cool thread I was so excited
> about :) Anything anyone has to say about this subject is of interest
> to me.

Well, I'm not sure I have much to say, but I do have questions. While
reading this thread I've been trying to reconcile what I've learned
about smooth differential geometry with its discrete cousin being used
here.

One thing I'm puzzled by is the discrete analog of tensor products for
vectors or wedge products for forms. I would imagine that the tensor
product of two vectors would correspond to the product of two edges
resulting in a face that is adjacent to both of them. But what if
there is more than one face that satisfies this requirement? In
particular, what is the product of an edge with itself, all the faces
it is adjacent to?

One of the most important features of differential geometry is its
coordinate independence (although many things are initially defined
using coordinate charts). But in the discrete analog there seems to be
an implicit choice of "coordinates", namely the way one arranges the
edges and vertices. Is this where the vector space of the chain complex
comes in, with the requirement that all results should be independent of
the choice of basis in this vector space?

In general, where can I learn more about the discrete "differential"
geometry? Hopefully including but not restricted to the limiting
process by which we recover the usual smooth differential geometry.

Thanks.

Igor

SM Presnell

Oct 6, 2003, 4:57:37 PM10/6/03
to


On Fri, 19 Sep 2003, Lubos Motl wrote:

> While ghosts "eat" some physical degrees of freedom in the original
> theory, ghosts for ghosts "eat" a part of the ability of ghosts to eat,
> and so on. Therefore if the matter is thought to be "positive" (a positive
> amount of physical degrees of freedom), ghosts are "negative", but ghosts
> for ghosts are again positive and so on.

Ah, it's like French pronunciation (as we were taught it in school,
anyway).

"Never pronounce the last letter,
unless it's a vowel,
unless it's an e,
unless it has an accent over it."


---------------
Stuart Presnell


John Baez

Oct 7, 2003, 4:59:49 PM10/7/03
to
In article <Pine.SOL.4.05.103092...@eis.bris.ac.uk>,
SM Presnell <cs...@bris.ac.uk> wrote:

... unless you don't give a damn, unless your French teacher
says you'll get an F if you don't shape up.

That's a very appealing analogy. Here's another one.

Say you have a weird country where the provinces overlap,
so that some people live in just one province, but some
live in two provinces, and some live in three, and so on.

And say you wanted to count the number of people
in this country. You could do it as follows:

First count the number of people in each province and add up
the results. This is an overestimate because a bunch of
people live in two or more provinces, and you've counted them
more than once. But, we'd be done if nobody lived in more
than one province!

Then, for each pair of provinces, find out the number of people
who live in both. Sum up the results for all pairs of provinces.
Then subtract this from the overestimate you had before!
Now you have an *underestimate*, because a bunch of people live
in three or more provinces, and you've subtracted them off more
than once. But, we'd be done if nobody lived in more than two
provinces!

Then, for each triple of provinces, find out the number of people
who live in all three. Sum up the results for all triples of
provinces. Then add this to the underestimate you had before!
Now you have an *overestimate*, because a bunch of people live
in four or more provinces, and you've added them back in more than
once. But, we'd be done if nobody lived in more than three provinces.

And so on. If nobody lives in more than n provinces, you'll
get the right answer eventually.

This trick is called the Principle of Inclusion-Exclusion, and
sometimes it's actually a really good way to count things. See:

http://mathworld.wolfram.com/Inclusion-ExclusionPrinciple.html

for more.
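Here is the province count done in code, with a made-up census:

```python
from itertools import combinations

# Made-up data: people are numbered, provinces are overlapping sets.
provinces = [
    {1, 2, 3, 4},
    {3, 4, 5},
    {4, 5, 6, 7},
]

# Inclusion-Exclusion: add single counts, subtract pairwise overlaps,
# add triple overlaps, and so on with alternating signs.
total = 0
for k in range(1, len(provinces) + 1):
    for combo in combinations(provinces, k):
        total += (-1) ** (k + 1) * len(set.intersection(*combo))

print(total)                        # 7
print(len(set.union(*provinces)))   # 7 -- agrees with the direct count
```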

If you throw some extra math at this idea, you can make up
a chain complex whose Euler characteristic is the number of
people, counted as an alternating sum of dimensions of
homology groups. And this sort of chain complex also shows
up when you study ghosts, ghosts-of-ghosts, and so on.

It's all about subtracting stuff - but subtracting too much,
so you gotta subtract from the stuff you subtracted, and so on.

One could write a very nice essay about this idea! But alas,
I've got to go teach "advanced calculus" now, so I'm afraid
*this* version of the essay will stop right here.


Gerard Westendorp

Oct 8, 2003, 2:18:02 AM10/8/03
to
Igor Khavkine wrote:

[..]


> One thing I'm puzzled by is the discrete analog of tensor products for
> vectors or wedge products for forms. I would imagine that the tensor
> product of two vectors would correspond to the product of two edges
> resulting in a face that is adjacent to both of them. But what if
> there is more than one face that satisfies this requirement? In
> particular, what is the product of an edge with itself, all the faces
> it is adjacent to?


I am not sure there is such a nice analog of vector products in the
discrete world. There are nice analogs of grad, curl, and div. The
energy is also transparently defined, it is just the sum of the energies
of each imaginary discrete component located on the edges and faces. But
what if you had some E-fields defined on edges, and some
B-fields defined on faces, and you want to see where the Poynting
vector EXB hangs out?

First, edges do not really correspond to vectors, but to components
of vectors. In a cubic lattice, the edges in the x,y and z direction
represent only the respective x,y and z components of the E-field
vectors. In a non-cubic lattice, it can get more complicated because
the x,y and z components have to be divided among more or less than
3 edges per grid point.

The Poynting vector should tell us how the energy flows out of a
small volume in space. In a cubic lattice, each grid point can be
assigned 3 edges and 3 faces which can be thought of as the 3
components of its E and B fields. The flow of energy out of these
6 components would have to be the divergence of a vector, which
would be a difference between 2 x-components, 2 y-components, and 2
z-components, or 6 components. But the exact flow of energy out of the
6 energy-carrying components seems to be more complex than this.

In a smooth world, we can simplify things and define a meaningful
Poynting vector, but in a discrete world, I don't see how to do it.
So more generally, I don't see how to do an external product of
vectors. An internal product seems OK though.

Gerard

Lubos Motl

Oct 9, 2003, 10:14:05 PM10/9/03
to

>> SM Presnell:

>> "Never pronounce the last letter,
>> unless it's a vowel,
>> unless it's an e,
>> unless it has an accent over it."
>
> John Baez:

> Say you have a weird country where the provinces overlap,
> so that some people live in just one province, but some
> live in two provinces, and some live in three, and so on.

Great examples. Another one: start with a solid body whose topology is
trivial - such as a tetrahedron, dodecahedron etc. - and let's try to
regularize the number of points on its surface.

Count the number of its faces F. Well, that's too many because they
intersect in the edges whose number is E. Edges are also objects, and
because each of them has been counted twice, we must subtract them, giving
F-E. However the vertices have been entirely subtracted - the
contribution from the faces canceled the contribution from the edges.
Because the vertices are points, too, we must add them back again (V of
them), giving F-E+V. (In higher dimensions, we would have to continue,
with alternating signs.) The result turns out to be

F - E + V = 2

Well, you can check that for the cube, tetrahedron, dodecahedron,
octahedron - but also for any irregular solid body, the result will
always be 2. (You find the Platonic polyhedra to come in pairs, with the same
"E" but F,V interchanged. They are the dual polyhedra.)

If you add g handles (for example, a discretized version of the doughnut
has g=1), you obtain 2-2g. This number "2" or "2-2g" is called the "Euler
characteristic" of the manifold; it is usually denoted chi, and it can be
interpreted as the regularized "number of points" on the manifold.

For example, the Cartesian product of two manifolds, M1 x M2, has
chi (M1 x M2) = chi(M1) x chi(M2)

just like the analogy with the "number of points" would lead us to
believe. Also, chi can be calculated by gluing the manifold together from
pieces, much like if we count its points.
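Here's a quick check of the counts above (the F, E, V values are the standard ones for each solid, and the g = 1 case is an n x n grid of squares with opposite sides glued into a doughnut):

```python
# Trivial-topology solids: F - E + V should be 2 every time.
solids = {
    "tetrahedron":  (4, 6, 4),      # (F, E, V)
    "cube":         (6, 12, 8),
    "octahedron":   (8, 12, 6),
    "dodecahedron": (12, 30, 20),
    "icosahedron":  (20, 30, 12),
}
for name, (F, E, V) in solids.items():
    print(name, F - E + V)          # 2

# One handle (g = 1): an n x n grid of squares glued into a torus
# has n^2 faces, 2 n^2 edges and n^2 vertices.
n = 4
F, E, V = n * n, 2 * n * n, n * n
print("torus", F - E + V)           # 0 = 2 - 2g
```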

Under suitable conditions, the functional integral
int D f(x)

over the functions f(x) on the manifold has a dimensionality which is like
the dimensionality of f(x) at one point to the chi-th power. Note that
this, again, trivially holds if the manifold is a set of N points (then
chi=N).

Wow, now I see that John's post continued in a similar fashion...

Squark

Oct 10, 2003, 6:20:40 PM10/10/03
to
Lubos Motl <mo...@feynman.harvard.edu> wrote in message
news:<Pine.LNX.4.31.031007...@feynman.harvard.edu>...

> Under suitable conditions, the functional integral
> int D f(x)
>
> over the functions f(x) on the manifold has a dimensionality which is like
> the dimensionality of f(x) at one point to the chi-th power. Note that
> this, again, trivially holds if the manifold is a set of N points (then
> chi=N).

Sounds neat. How do you show it?

Best regards,
Squark

------------------------------------------------------------------

Write to me using the following e-mail:
Skvark_N...@excite.exe
(just spell the particle name correctly and change the
extension in the obvious way)

Gerard Westendorp

Oct 14, 2003, 3:57:01 PM10/14/03
to

Lubos Motl wrote:

[..]

> Great examples. Another one: start with a solid body whose topology is
> trivial - such as a tetrahedron, dodecahedron etc. - and let's try to
> regularize the number of points on its surface.
>
> Count the number of its faces F. Well, that's too many because they
> intersect in the edges whose number is E. Edges are also objects, and
> because each of them has been counted twice, we must subtract them,
> giving F-E. However the vertices have been entirely subtracted -
> the contribution from the faces canceled the contribution from the
> edges. Because the vertices are points, too, we must add them back
> again (V of them), giving F-E+V. (In higher dimensions, we would
> have to continue, with alternating signs.) The result turns out to
> be
>
> F - E + V = 2
>

What exactly is "regularize" here? I'll try to guess it:
The formula could be generalized as:

fF - eE + vV = P

Here f is the number of pixels per face, e the number of pixels per
edge, and v the number of pixels per vertex. And P is the total number
of points.
One way to "regularize" is to set f=e=v=1.

Another way to look at it is to interpret "point" or "pixel" as
"information". The reasoning is based on:

information = N_variables - N_equations

Because equations can be dependent, they are themselves subject
to constraining equations.

Specifically, in a network with F faces, we could assign a mesh
current to each face, giving F containers of information. But
if we say the currents on the edges are given, we have E
constraining equations (each edge current is the sum of the mesh
currents of the meshes that the edge separates), so we now only
have F-E containers of information. But then we could say the edge
currents must satisfy Kirchhoff's current law at each vertex, so the
information is F-E+V. But
as Derek pointed out (btw Derek, where are you?) even the V
equations are not independent. Because each current must have a
beginning and an end, each disconnected component (C) of the network
leads to a loss of one independent Vertex equation. So:

Information in mesh currents
given the edge currents
given the edge currents obey Kirchhoff's current law
given each current has an end vertex and a beginning vertex

= F-E+V-C

We could call this number the Face degrees of freedom (F_dof). And we
could call
E_dof = E-V+C
the edge degrees of freedom (E_dof).

The edge degrees of freedom can be interpreted as the number of
independent currents we can specify in a circuit while satisfying
Kirchhoff's current law. For example, a simple loop has E_dof=1, an
open chain has E_dof=0.
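Here is E_dof = E - V + C for those two examples (toy code, names mine), with the components counted by a little union-find:

```python
def components(n_vertices, edges):
    """Count connected components with a tiny union-find."""
    parent = list(range(n_vertices))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n_vertices)})

def e_dof(n_vertices, edges):
    # edge degrees of freedom: E - V + C, the cycle rank of the graph
    return len(edges) - n_vertices + components(n_vertices, edges)

loop  = [(0, 1), (1, 2), (2, 0)]    # a simple triangular loop
chain = [(0, 1), (1, 2)]            # an open chain

print(e_dof(3, loop))    # 1 -- one independent circulating current
print(e_dof(3, chain))   # 0 -- no nonzero current obeys Kirchhoff
```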

The F_dof formula looks like the Euler characteristic. We could
interpret it as the number of independent mesh currents we can
specify on a network if the edge currents are specified. But it
appears to be slightly more tricky.

A surface with Euler characteristic 2 has F_dof = Euler-1 = 1

So according to this we can specify a mesh current pattern that
will not alter any edge currents. This seems easy: it is a
constant mesh current on each Face.

OK, but now for a Torus, which should have F_dof=-1. I would expect
that we can now no longer do this. But no, a constant mesh current
on each Face still leaves all edge currents unchanged. So if our
F_dof formula is to hold, we must find at least 3 ways of specifying
edge currents that cannot be expressed as sums of mesh currents on
faces.

I remember hearing something once about the fact that you
"cannot comb the hair of a sphere". The edge currents that
cannot be expressed as sums of mesh current on faces seem to
be intuitively related to this. If you try to comb a nice
pattern on the hairs on a sphere, you either get a point
where the hair pattern abruptly ends, or a kind of
"North Pole".

On a discretized sphere, you cannot figure out a pattern of
edge currents that satisfies Kirchhoff's current law (div I = 0)
and covers each edge only once. But on a Torus, you can have
hairs combed around the big circles, and combed around the small
circles. These 2 patterns together form a pattern of closed edge
currents that covers each edge only once. And they are both patterns
that cannot be expressed as the sum of mesh currents on faces.
A third independent loop that cannot be expressed as the sum of
mesh currents on faces seems to be a loop around the hole.

I feel as if I am reinventing some well known stuff here...

Gerard


Tim S

Nov 14, 2003, 2:27:54 PM11/14/03
to
on 15/8/03 10:45 pm, Derek Wise at dere...@yahoo.com wrote:

> [...about path integrals on a lattice]

OK, I want to learn more about this stuff, so I'm going to try to revive
this thread, which has gotta be more productive than the Superstring Wars
being fought over in some other threads.

Since the Wiz seems to be too busy to drive this according to whatever
teaching plan he had in mind, I'll simply seize the wheel and go off down
whatever side roads look as if they might have some pretty scenery, and see
if we get where we were going. But any guidance would be appreciated because
I don't really have a roadmap, just one of those large-scale atlases that
tell you there are cities and forests and things ("L^2(X) City",
"Peter-Weyl Woods", "Here Be Spin Foams", etc) but not how to get to them
or through them.

Before I set off, I'll just tidy up a loose end in the post I'm purporting
to be replying to.

Derek said, speaking of norms of vectors in the quotient space of
connections by gauge transformations:

> What does this look like? It's just the Euclidean distance from the
> subspace J to the point A. This leads us to an interesting point.
> R^n is not just a vector space but an inner product space, which
> means we have a sensible notion of orthogonality. It's not too hard
> to see (by drawing a picture if necessary) that the norm on C^1/J
> just assigns to A+J the norm of the orthogonal projection of A onto
> the orthogonal complement of J, J^{perp}. What does this mean?
> It means that using the natural Lebesgue measure on the Euclidean
> space C^1/J = R^3/R is just the same as my earlier suggestion of
> gauge fixing by integrating over the orthogonal complement of the
> subspace determined by gauge transformations!
>
>We know that C^1/J is R^3/R, which is isomorphic to R^2. We have
> only to determine which R^2 subspace it is. Using the above
> observation about orthogonality, we can pick any basis of J^{perp}.
> Thus,
>
> C^1/J = span { (1,1,0),(1,-1,-2) }
> = { (x1+x2 , x1-x2 , -2x2) | x1,x2 in R } =~ R^2

But I'm pretty sure that to pick up the correct measure from R^m, we need
our basis vectors to be not only mutually orthogonal but also normalised,
otherwise we end up with a gratuitous scale factor. If we normalise, then
we get a correction to Derek's remarks below:

> Now we can pull out our magic formula again and do the path
> integral:
>
>
> / / / pi^2 \
> | | exp(-S(A)) dx1 dx2 = sqrt | ------ | .
> / / \ det(M) /
>
> This time, though, the determinant of M is not zero but 36, so we
> get pi/6.

Actually, the determinant is 3, so we get pi/sqrt(3).
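Derek's matrix M isn't reproduced in this post, so here is only a numerical sanity check of the magic formula itself, with an assumed positive-definite M of determinant 3 so that the answer should be pi/sqrt(3):

```python
from math import exp, pi, sqrt

# Check int exp(-x^T M x) dx1 dx2 = sqrt(pi^2 / det M) numerically
# for an assumed M (any positive-definite matrix with det M = 3).
M = [[2.0, 1.0],
     [1.0, 2.0]]                    # det M = 3

h, L = 0.02, 6.0                    # grid step and integration cutoff
n = int(2 * L / h)
total = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    for j in range(n):
        y = -L + (j + 0.5) * h
        S = M[0][0]*x*x + (M[0][1] + M[1][0])*x*y + M[1][1]*y*y
        total += exp(-S) * h * h

print(round(total, 4))              # ~1.8138
print(round(pi / sqrt(3), 4))       # 1.8138
```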

Now off onto our first side road. I want to understand the classical theory
on this lattice a bit better before we start thinking about quantisation
(or, with Gerard Westendorp, thermodynamics).

I've taken an irrational dislike to the way we've written our action:

Sum_p F(p)^2

where p are our plaquettes and F is our curvature dA.

Aesthetically, I'd like one of those factors of F(p) to be something sort
of dual to F rather than F itself. Let's call it *F, because it vaguely
reminds us of something by that name that we saw somewhere else a while
ago. These dual quantities live on the Poincare dual lattice. We get this
by replacing each 2-cell by a 0-cell placed at its centre, each 1-cell by a
1-cell crossing it, and each 0-cell by a 2-cell surrounding it -- or, on an
arbitrary n-complex, replacing each p-cell by an (n-p)-cell. Then we can also
dualise our p-forms to get (n-p)-forms on the dual lattice. On a square
lattice, it looks a bit like this:

(Our original lattice has edges delineated with | and - and vertices marked
+, while the dual lattice has edges marked x and vertices marked O.)

| x | x |
| x | x |
| x | x |
-----+-------------------------+-----------------------------+--------
| x | x |
| x | x |
| x This is | x |
| x Plaquette n | x |
| x | x |
| x | x |
| x | x |
xxxxx|xxxxxxxxxxxOxxxxxxxxxxxxx|xxxxxxxxxxxxxxOxxxxxxxxxxxxxx|xxxxxxxx
| x \ | x |
| x \ | x |
| x Vertex dual| x Edge dual |
| x to | x to E |
| x Plaquette n| Edge E x <- |
| x | | x |
| x | \ / x |
-----+-------------------------+-----------------------------+--------
| x |\ x |
| x | \ x |
| x | This vertex x |
| x | has as dual x |
| x | the x-and-O x |
| x | plaquette x |
| x | around it x |
xxxxx|xxxxxxxxxxxOxxxxxxxxxxxxx|xxxxxxxxxxxxxxOxxxxxxxxxxxxxx|xxxxxxxx
| x | x |
| x | x |
| x | x |
| x | x |
| x | x |
| x | x |
| x | x |
-----+-------------------------+-----------------------------+--------
| x | x |
| x | x |
| x | x |


The dual of a 2-form on plaquette p is the 0 form on the 0-cell dual to p.
At this stage of the proceedings, I shall be exercising an inexcusably
cavalier disregard of issues concerning orientations and scale factors,
blithely accepting that this may cause me terrible headaches later on with
left/right-hand screw rules and awful confusion about what units things are
measured in. I shall simply assume that in the cochain bases we are using,
fields and their duals have numerically equal values at corresponding
places. So F(p) is numerically equal to *F(*p).

We can now form the product (*F(p) /\ F(p)). It lives on some sort of local
product lattice between the 2-cells of the original lattice and the
corresponding 0-cells of its dual. However, we won't be needing to think
about this for a bit, because we're going to be taking a shortcut across a
muddy field, by skipping over the bit where we integrate the Lagrangian and
calculate the variation in the action, and going straight to the
(classical) field equations, so we can try to hook up to something
familiar.

There are two field equations:

(1) dF = 0

This is equivalent to div B = 0 and curl E + dB/dt = 0 and isn't very
exciting, particularly in this case where F is a 2-form on a 2-lattice and
hence dF is trivially zero. The other equation is more interesting:

(2) d*F = *j

where j is our source current density.

Let's think about what this means.

F is the field, a 2-form, defined on each plaquette. Hence *F is a 0-form
(i.e. a function) defined on the corresponding vertices of the dual
lattice. Hence d*F is a 1-form, defined on the dual edges. Our convention
on scale factors means that d*F on a dual edge is basically the difference
in F between the neighbouring plaquettes whose corresponding dual vertices
it links, where, if I'd been sensible, I'd know what the orientation of the
dual edge was, and hence which way the difference was taken.

Now, *j is also a 1-form, the charge/current density 1-form. We're more
familiar with its dual j, however, which lives on the edges of our
_original_ lattice. If the edge is spacelike, then j is the current density
flowing along that direction (or the total current -- on a lattice, it's
not necessarily all that easy to tell the difference, particularly with my
simple-minded convention for distance scales). If the edge is timelike, then
j is the charge density at that point in space.

So, our field equation tells us that the difference between the field in
neighbouring cells is equal to the magnitude of the current/charge density
on the edge separating them. 'Current is the source of curvature.'

(If the current is zero everywhere, the curvature -- i.e. the
electromagnetic field -- will be constant over spacetime. What this
constant is depends on boundary conditions. We'd normally set it to zero on
'physical' grounds).
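For concreteness, here's a trivial numpy sketch of that j = 0 statement (the
4x4 lattice size and the value 3.7 are purely my own illustration): with no
current, every difference of F across a dual edge vanishes, which is exactly
what "F constant over spacetime" means on the lattice.

```python
import numpy as np

# A candidate vacuum field: one value of F per plaquette, all equal.
F = np.full((4, 4), 3.7)

# d*F on a dual edge is the difference of F across the two plaquettes
# that the edge links (per the convention on scale factors above).
dF_horiz = F[:, 1:] - F[:, :-1]   # dual edges between horizontal neighbours
dF_vert = F[1:, :] - F[:-1, :]    # dual edges between vertical neighbours

# j = 0 everywhere is equivalent to all these differences vanishing.
assert np.all(dF_horiz == 0) and np.all(dF_vert == 0)
```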

-------------------------------------------------------

The relation with the familiar Maxwell equations becomes a bit more
perspicuous if we go up a dimension, e.g. to a cubic lattice.

F is still a 2-form, but *F is now a 1-form, defined on the dual edges
passing through the plaquettes where F is defined. d*F is now a 2-form on
the dual plaquettes.

Here's a chunk of the lattice and the dual lattice, rendered in ASCII-art
with outstanding skill...

+
/ |
/ |
/ |
/ |
+ |
Oxx|xxxxxxxxxxxO
x | | x
+--x--|--------+--x-----------+
/ | x | /| x /|
/ | x | / | x / |
/ | x | / | x / |
/ | x | / | x / |
+--------------+--------------+ |
| | x | | x | |
| | Oxx|xxxxxxxxxxxO | |
| +-----|--------+-----|--------+
| / | / | /
| / | / | /
| / | / | /
| / | / | /
+--------------+--------------+

We're only showing one plaquette of the dual lattice, along with its edges
(marked x) and its vertices (marked O). Now, consider those plaquettes of
the original lattice through which one of the shown dual edges passes.
These have a value of F defined on them. A numerically equal value of *F is
defined on the corresponding dual edges.

Now, d*F is basically the sum of the value of *F along these edges (taking
into account their orientation...), considered as a 2-form on the dual
plaquette which I've drawn.

*j is therefore also a 2-form defined on this dual plaquette, and hence j
is a 1-form on the edge which passes through this plaquette. If that edge
is timelike, then the dual plaquette is purely spacelike, and j is the
charge density (per unit area) on that edge (or, equivalently, the total
charge lying in the dual plaquette, owing to our convention about length
scales). If the edge is spacelike, then the dual plaquette has a pair of
spacelike edges and a pair of timelike edges, and j is the current density
flowing along the edge, in units of charge per unit distance per unit time
(or the total current, blah blah scale factors blah blah).

So if the edge is spacelike, d*F = *j says that the sum of *F around the
edges of the dual plaquette is equal to the current flowing through the
plaquette. This is basically

/int B.dl = I

(or curl B = j, take your pick).

If the edge is timelike, d*F = *j says that the sum of *F around a loop
surrounding the 'point' in 'space' that the edge 'worldline' passes through
is equal to the charge contained in the loop, which is analogous to

/int E.dS = Q

(or div E = \rho),

except of course one dimension lower (we are integrating around a loop
rather than over a surface).

I'm not going to attempt 4-dimensional ASCII art, but obviously in 4-D
everything looks more familiar still.

-------------------------------------------------------

Next, I think we should demonstrate conservation of charge.

Since d^2 is always zero, we have d*j = dd*F = 0. Let's see how this works
pictorially.

+--------------+--------------+
| | |
| | |
| | |
| Oxxxx>xxxxx>xxxO |
| x | x |
| ^ v x |
| x | v |
+-------x-->---+--<----x------+
| x | x |
| ^ ^ v |
| x | x |
| Oxxx<xxxxxx<xxxO |
| | |
| | |
| | |
+--------------+--------------+

This time, orientations really do matter, so I've chosen a screw rule: to
get from the orientation of an original edge to the orientation of the dual
edge, rotate 90 degrees anti-clockwise; and I've attempted to indicate with
arrows the orientation of the edges. The dual orientations that I've chosen
take us clockwise around the dual plaquette (this makes things easy to add
up). The screw rule (in reverse) then implies that the original edges are
all oriented toward the central vertex.

d*F is simply the difference between the values of F in the plaquettes at the
ends of the dual edges. dd*F is the result of summing these up, which
obviously comes to zero since each plaquette is added once and subtracted
once.

Hence the sum of *j along the edges of the dual plaquette also comes to
zero.

Hence the sum of j along the four edges pointing toward the central vertex
equals zero.

If all four of those edges are spacelike, this means that the sum of the
currents arriving at the vertex is zero.

If (say) the horizontal direction is spacelike and the vertical direction
is timelike, then we have incoming currents along the two horizontal edges,
and the two vertical edges give 'before' and 'after' values of the charge
density, except that the 'after' one is _minus_ the charge density. Adding
all four up to get zero tells us that as the clock ticks through the
vertex, its charge increases by the amount of current that arrives there
during the tick (modulo choice of units to measure distances, times, rates
and densities).

These are of course two expressions of charge conservation.
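Here's a tiny numpy sketch of the telescoping argument (the lattice size,
the random field, and the choice of vertex are my own illustrative choices):
summing the d*F differences around the dual plaquette adds and subtracts
each neighbouring value of F exactly once, so dd*F vanishes identically.

```python
import numpy as np

# A hypothetical field F, one random value per plaquette.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 4))

# The four plaquettes surrounding an interior vertex, taken in a cycle.
i, j = 2, 2
cycle = [F[i - 1, j - 1], F[i - 1, j], F[i, j], F[i, j - 1]]

# d*F on each dual edge is the difference of F across it; going around
# the dual plaquette, the sum telescopes to zero: dd*F = 0.
ddF = sum(cycle[(k + 1) % 4] - cycle[k] for k in range(4))
assert abs(ddF) < 1e-12   # hence the sum of *j into the vertex vanishes too
```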

-------------------------------------------------------

Now, let's make waves!


We have d*F = *j.

Hence *d*F = j;

Hence d*d*F = dj.

Suppose j = 0.

Then d*d*F = 0.

What does this mean?

Well, in 2-D, we already know that d*F is a (dual) 1-form which lies on the
dual edges connecting plaquettes, and gives the difference between the
values in those plaquettes.

Hence *d*F is a 1-form which lies on the ordinary edges which act as faces
between plaquettes, and gives the difference between the field values in
those plaquettes (for a suitable choice of orientation).

So d*d*F is an ordinary 2-form associated with the plaquettes of the
original lattice. But what actually is it?


+--------------+--------------+--------------+
| | | |
| A | B | C |
| | | |
| OxxxxxxxxxxxxxxOxxxxxxxxxxxxxxO |
| x | x | x |
| x | ^ | x |
| x | x | x |
+-------x------+---->--x--->--+-------x------+
| x | x | x |
| D x ^ E ^ v F x |
| x | x | x |
| Oxxx<xxxxx<xxxxOxxx>xxxxxx>xxxO |
| x | x | x |
| x ^ v ^ x |
| x | x | x |
+-------x------+----<--x---<--+-------x------+
| x | x | x |
| G x | H v | J x |
| x | x | x |
| OxxxxxxxxxxxxxxOxxxxxxxxxxxxxxO |
| | | |
| | | |
| | | |
+--------------+--------------+--------------+


I've labelled each plaquette with a letter. We're interested in the value
of d*d*F on the central plaquette.

This is equal to the sum of *d*F along its edges, which we have supplied
with a clockwise orientation. Our screw rule then means that the
corresponding dual edges, holding the values of d*F, all point outward. d*F
on any such edge is the value of *F at the end vertex minus the value of *F
at the source vertex, which is the value of F at the end plaquette minus
the value at the source plaquette.

d*d*F =

{ F(D) - F(E) } + { F(F) - F(E) } + { F(B) - F(E) } + { F(H) - F(E) }

= { F(D) - 2 F(E) + F(F) } + { F(B) - 2 F(E) + F(H) }

This first term is the discretised second derivative of F along the
horizontal direction and the second term is the discretised second
derivative of F along the vertical direction. So we basically have the 2-D
Laplacian, and d*d*F = 0 gives the Laplace equation:

Del^2 F = 0.
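A quick numerical sanity check that the edge-sum really is the familiar
5-point stencil (the 3x3 patch and the random field are my own illustrative
choices; F[1, 1] plays the role of plaquette E):

```python
import numpy as np

# Hypothetical field values on a patch of plaquettes.
rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))

# d*d*F on the central plaquette, grouped as in the text:
horiz = F[1, 0] - 2 * F[1, 1] + F[1, 2]   # { F(D) - 2 F(E) + F(F) }
vert = F[0, 1] - 2 * F[1, 1] + F[2, 1]    # { F(B) - 2 F(E) + F(H) }
ddF = horiz + vert

# ...which is exactly the 5-point discrete Laplacian stencil:
stencil = F[1, 0] + F[1, 2] + F[0, 1] + F[2, 1] - 4 * F[1, 1]
assert np.isclose(ddF, stencil)
```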

If one of the directions is timelike rather than spacelike, things work a
bit differently -- I think we need to use a different convention for the
screw rule in the dualising operation *. Anyway, d*d* gives the 2-D
d'Alembertian, so d*d*F = 0 gives the discretised wave equation for F.

We need some boundary conditions. In the case of the wave equation, I think
that the values of F along the initial row of plaquettes, and of d*F along
the future edges of those plaquettes, give good data for the future
development of the waves.

HOWEVER, the discretised wave equation tends to have horrible numerical
instabilities, so I'm not going to try giving an example. If we discretise
space but not time, however, we get the standard equation for waves on a
lattice, e.g. sound waves on a crystal lattice.

(You may be wondering why I'm babbling about the wave equation when our
field equation d*F = j already indicates that the vacuum field is constant.
The reason is that in two dimensions there isn't much wiggle room, so the
field equation is very constraining and the wave equation is satisfied in a
trivial way. In higher dimensions this is no longer true. We can see this
by counting the number of constraints and the number of degrees of freedom.

In two dimensions, F is a 2-form, hence has 2 x 1/2 = 1 independent
component (1 degree of freedom).

*F is a 0-form.
Hence d*F is a 1-form with one degree of freedom (being d of the single
function *F).

So d*F = *j imposes one constraint, which leaves us with no degrees of
freedom, completely fixing our field (up to a constant).

In three dimensions, F is still a 2-form, so has 3 x 2/2 = 3 independent
components (3 degrees of freedom).

*F is a 1-form.
d*F is a 2-form, hence has 3 x 2/2 = 3 independent components.

So d*F = *j imposes three constraints on our 3 degrees of freedom, again
completely fixing the field.

But in 4 dimensions:

F is a 2-form, so has 4 x 3/2 = 6 independent components.

*F is a 2-form, so d*F is a 3-form, with 4 x 3 x 2/(1 x 2 x 3) = 4
independent components.

d*F = *j imposes 4 constraints on our 6 degrees of freedom, leaving 2
degrees dependent on boundary conditions and free to propagate across
spacetime.

This calculation was simplified by the fact that our gauge group is one
dimensional, but the same idea applies in other similar theories. For
instance, in n-dimensional gravity, the Riemann tensor has n^2(n^2-1)/12
independent components, while the Einstein tensor has n(n+1)/2.

Thus in 2 D, we have 1 degree of freedom, and the field equation imposes 3
constraints (i.e. not only does curvature not propagate, but the Einstein
equation severely constrains the possible matter content!)

In 3 D we have 6 degrees of freedom, and the field equation imposes 6
constraints, so the field is completely determined by the matter content.

But in 4 D we have 20 degrees of freedom, but the field equation only fixes
10 of them, so we have 10 free to wiggle as gravitational waves.)
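All this bean-counting is easy to double-check mechanically. A Python
sketch, using nothing but the component-count formulas quoted above:

```python
from math import comb

# Components of the 2-form F in n dimensions: n(n-1)/2 = C(n, 2).
assert [comb(n, 2) for n in (2, 3, 4)] == [1, 3, 6]

# Riemann and Einstein tensor component counts, per the formulas above.
def riemann(n):
    return n**2 * (n**2 - 1) // 12

def einstein(n):
    return n * (n + 1) // 2

assert [(riemann(n), einstein(n)) for n in (2, 3, 4)] \
    == [(1, 3), (6, 6), (20, 10)]

# Wiggle room left over in 4-D gravity: 20 - 10 = 10 wave degrees.
assert riemann(4) - einstein(4) == 10
```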

Next time, I think I'll ramble a little about scalar matter fields, U(1)
representations and the action principle. Then, depending on how I feel and
what I've been working on in the meantime, I might talk about Brownian
motion and heat baths, or I might talk about quantisation.

Hmm...

That went better than I expected.

Wonder what disasters lie in store as we head off into the back country?

Tim

Urs Schreiber

Nov 14, 2003, 3:58:53 PM
"Tim S" <T...@timsilverman.demon.co.uk> wrote in message
news:BBDADB7F...@timsilverman.demon.co.uk...

> on 15/8/03 10:45 pm, Derek Wise at dere...@yahoo.com wrote:


I'll take the opportunity of Tim's very nice discussion to mention some
related things.


> I've taken an irrational dislike to the way we've written our action:
>
> Sum_p F(p)^2
>
> where p are our plaquettes and F is our curvature dA.
>
> Aesthetically, I'd like one of those factors of F(p) to be something sort
> of dual to F rather than F itself. Let's call it *F,

As it turns out, it is in fact possible to construct the true Hodge star
operator on (hyper-)cubic lattices with arbitrary metric. Using this it is
possible to write the EM action on hypercubic lattices in the familiar form.

Eric Forgy and I have been working on this lately and because it is of some
interest with respect to this thread I note that a pre-preprint version of
our notes can be found at http://www-stud.uni-essen.de/~sb0264/p4.pdf . On
pp. 30 of this text the Hodge star operator on discrete spaces is discussed
in general and its specific realization on (hyper-)cubic graphs is analyzed
in detail. Volume forms, integration, and everything needed to write down
the EM action on the lattice with arbitrary (and in particular non-flat)
background metric as familiar from the continuum is discussed in section 4.

This approach does not refer to the dual lattice for defining Hodge duals
but instead proceeds in the spirit of noncommutative geometry by
representing every object in discrete differential geometry as an operator
on a suitable Hilbert space. An inner product (in the pseudo-Riemannian
case) or scalar product (in the Riemannian case) on this Hilbert space then
induces a notion of metric on the discrete space and the Hodge dual can be
formulated in terms of operator products and operator adjoints.

> Now, *j is also a 1-form, the charge/current density 1-form. We're more
> familiar with its dual j, however, which lives on the edges of our
> _original_ lattice. If the edge is spacelike, then j is the current
density
> flowing along that direction (or the total current -- on a lattice, it's
> not necessarily all that easy to tell the difference, particularly with my
> simple-minded convention for distance scales). If the edge is timelike,
then
> j is the charge density at that point in space.

Implicitly the discussion has focused on using timelike and spacelike edges.
Why not use lightlike ones?

We were kind of surprised to find that the non-commutative-geometry-like
formulation of differential geometry on discrete spaces singles out a metric
on (hyper-)cubic lattices with respect to which _all_ edges are _lightlike_
(pp. 46). A (hyper-)cubic complex with such a metric we call an "n-diamond
complex" and it turns out that such diamonds enjoy all sorts of nice
properties. See below.

> Next, I think we should demonstrate conservation of charge.

When a fully "mimetic" formulation of discrete geometry is available, all
results such as this charge conservation automatically carry over from the
continuum. By "mimetic" one means (e.g.
http://www.math.unm.edu/~stanly/mimetic.html
http://math.unm.edu/~stanly/mimetic/contmech.html) a discrete framework in
which all the familiar algebraic relations such as Stokes' theorem and
various identities involving the Hodge star hold without lattice
corrections.

> Now, lets make waves!
[...]


> HOWEVER, the discretised wave equation tends to have horrible numerical
> instabilities, so I'm not going to try giving an example.

I am not an expert on the general case of waves on discrete spaces, but I
think that on diamond complexes, where all edges are lightlike, the
discretized wave equation actually gives the exact result (along the
preferred lattice directions), since the waves can propagate happily along
the lightlike edges. The Laplace-Beltrami operator for n-diamond complexes
is worked out on p. 45 and it clearly has all such waves in its kernel.

Furthermore hypercubic complexes have the advantage that the (discrete)
exterior bundle over them does decompose as the product of two (discrete)
spinor bundles just as in the continuum case (p 47). (This is not true for
non-hypercubic discrete spaces.) Therefore this formalism also allows one to
write down Dirac-Kaehler actions and equations (pp. 56). (Some of the
notation regarding spinors and forms is not explained in the above file but
in appendix A of http://xxx.lanl.gov/abs/hep-th/0311064 .)


Eric A. Forgy

Nov 15, 2003, 7:09:56 PM
Tim S <T...@timsilverman.demon.co.uk> wrote

> on 15/8/03 10:45 pm, Derek Wise at dere...@yahoo.com wrote:
>
> > [...about path integrals on a lattice]
>
> OK, I want to learn more about this stuff, so I'm going to try to revive
> this thread, which has gotta be more productive than the Superstring Wars
> being fought over in some other threads.

Hi Tim! :)

Good luck! I also tried without much luck to revive this thread.
Perhaps we can keep it alive just amongst ourselves. Urs let the cat
out of the bag that we have been secretly working together to conquer
the world... err I mean develop an algebraic/combinatorial version of
differential geometry on what may be called a "directed n-graph",
which smells a lot like an n-category. An especially nice case of
which might be called an "n-diamond complex."

With any luck Derek and Professor Baez may make an appearance here and
there if we can make things interesting enough :)

> Now off onto our first side road. I want to understand the classical theory
> on this lattice a bit better before we start thinking about quantisation
> (or, with Gerald Westendorp, thermodynamics).
>
> I've taken an irrational dislike to the way we've written our action:
>
> Sum_p F(p)^2
>
> where p are our plaquettes and F is our curvature dA.
>
> Aesthetically, I'd like one of those factors of F(p) to be something sort
> of dual to F rather than F itself. Let's call it *F, because it vaguely
> reminds us of something by that name that we saw somewhere else a while
> ago.

Of course this is the same objection I had when the thread first began
:) However, Professor Baez managed to convince me that to worry about
this Hodge podge business would only add unnecessary complications to
the formulations. The proof of the complications is the Herculean
effort it must have taken you to draw those ascii diagrams. Try doing
that with a simplicial complex! :)
There is a lot of material you can learn with this unphysical metric
and then we can always go back and insert a more physically motivated
metric later on.

> These dual quantities live on the Poincare dual lattice. We get this
> by replacing each 2-cell by a 0-cell placed at its centre, each 1-cell by a
> 1-cell crossing it, and each 0-cell by a 2-cell surrounding it -- or, on an
> arbitrary n-complex, replacing each p-cell by an n-p cell. Then we can also
> dualise our p-forms to get n-p forms on the dual lattice. On a square
> lattice, it looks a bit like this:

This approach to Hodge duality is well known at least in my field
(applied computational EM). When you first start playing around with
it, results come cheaply enough that you feel you are really onto
something. I guarantee that it is just an illusion. When you dig
deeper, this Poincare dual <-> Hodge dual is going to get you into
trouble. My vote is to follow the principle of "KISS" :)

I decided not to delete this because it hurts to think of the effort
to draw it :)

> We can now form the product (*F(p) /\ F(p)).

Uh huh. Good luck making this product well defined :) Let's not build
a house of cards :)

> It lives on some sort of local
> product lattice between the 2-cells of the original lattice and the
> corresponding 0-cells of its dual. However, we won't be needing to think
> about this for a bit, because we're going to be taking a shortcut across a
> muddy field,

Danger :) Is your product going to end up being associative? Graded
commutative? Does it satisfy the graded Leibniz rule?

> by skipping over the bit where we integrate the Lagrangian and
> calculate the variation in the action, and going straight to the
> (classical) field equations, so we can try to hook up to something
> familiar.

If you REALLY want some sense of duality that is not going to cause
too much of a headache, I suggest taking a look at what I outlined in
a previous post in this thread

http://groups.google.com/groups?q=g:thl3725983303d&dq=&hl=en&lr=&ie=UTF-8&selm=3fa8470f.0307311003.12f45e69%40posting.google.com&rnum=11

Then again, we might just be better off forgetting it like the master
says :) At least for the time being.

[snip]



> Next time, I think I'll ramble a little about scalar matter fields, U(1)
> representations and the action principle. Then, depending on how I feel and
> what I've been working on in the meantime, I might talk about Brownian
> motion and heat baths, or I might talk about quantisation.

Like I said in another prior post in this thread, I am always very
happy to hear what anyone has to say about this subject that is so
close to my heart. I'm looking forward to learning more about whatever
you have time to say :)

Best regards,
Eric

Eric A. Forgy

Nov 17, 2003, 3:41:32 PM

"Urs Schreiber" <Urs.Sc...@uni-essen.de> wrote:
> "Tim S" <T...@timsilverman.demon.co.uk> wrote:

> > on 15/8/03 10:45 pm, Derek Wise at dere...@yahoo.com wrote:
>
> I'll take the opportunity of Tim's very nice discussion to mention some
> related things.

Urs has been itching to discuss this stuff out in the open for months
now. I guess he just couldn't wait for me to finish the public version
of our notes :)

Since he already opened the floodgates, we might as well go all the
way :)

> As it turns out, it is in fact possible to construct the true Hodge star
> operator on (hyper-)cubic lattices with arbitrary metric. Using this it is
> possible to write the EM action on hypercubic lattices in the familiar form.

Ack! :)

By now it is well known (I suppose) that you can construct a nice
graded differential algebra on a directed graph with a map

d: Omega^p -> Omega^{p+1}

satisfying

d^2 = 0

and

d(AB) = (dA)B + (-1)^|A| A(dB).

See for example (one of my all time favorite papers):

http://www.arxiv.org/abs/gr-qc/9808023
Discrete Riemannian Geometry
A. Dimakis, F. Muller-Hoissen

The way I like to think about this is to relate it to elementary
algebraic topology where the graded algebra is the space of co-chains
and the product is the cup product (defined on the cochain level), with
d being the coboundary map.

However, if you really think about it, cup product defined on the
cochain level is a lot like concatenation of paths (or their duals!).

Therefore, I prefer to call the graded vector space

P = (+)_r P_r

the space of paths instead of chains. An element of the dual space is
then a "copath", i.e. a linear functional on the space of paths in a
directed n-graph. So that if A is a p-"copath" then

<A|i0...ip>

is the value of A evaluated on the path |i0...ip>.

But what IS the space of paths?!?!

Oh yeah, and what is a directed n-graph?!?!

Ok, let me back up...

A directed n-graph is about the only thing it could be. First of all,
what is an ordinary directed graph?

A directed graph G (for my purposes) consists of countable sets G_0
and G_1 together with maps

s_1,t_1:G_1->G_0.

The elements of G_0 are our "nodes" and the elements of G_1 are our
"directed edges". Given a directed edge |g_1> in G_1, then

s|g_1>

is the "source node" of |g_1> and

t|g_1>

is the "target node" of |g_1>.

The space of 0-paths (nodes) is that consisting of formal linear
combinations of nodes and the space of 1-paths (directed edges) is
that consisting of formal linear combinations of directed edges.

Ok. Now a directed n-graph G (for my purposes) consists of n+1 countable
sets

G_p, p in {0,...,n}

together with maps

s_p,t_p: G_p->G_{p-1}

satisfying

s_{p-1} t_p = t_{p-1} s_p,

which essentially says the maps commute, and we truncate this by
setting s_0|g_0> = t_0|g_0> = 0 for all |g_0> in G_0.

The elements of G_0 and G_1 may again be thought of as nodes and
directed edges, respectively, while the elements of G_p will in
general be referred to as "simple p-paths."

With the intentional flavor of n-categories, a simple p-path |g_p> may
be thought of as a "p-dimensional arrow" from the source s_p|g_p> to
the target t_p|g_p>. We will eventually interpret the arrows as
providing a sense of time, as in the causal poset approach of Sorkin.

We then define the space of p-paths in the obvious way.

A "boundary map" @_p:P_p(G)->P_{p-1}(G) may be defined on the space of
p-paths on a directed n-graph G via

@_p := t_p + (-1)^p s_p.

The boundary map is not, in general, nilpotent, because

@_{p-1} @_p = t_{p-1} t_p - s_{p-1} s_p

or if we drop the subscripts

@^2 = t^2 - s^2.

Because "the boundary of a boundary is zero" is one of the most
beautiful things in mathematical physics, I can't help but to think it
holds a special place. For this reason, I think of the space of
p-paths as being "pre-geometric." It seems that anything that is
geometrically or topologically realizable should satisfy @^2 = 0.

Therefore, we let the space of "p-chains" C_p(G) on a directed
n-graph be that subspace of "p-paths" for which @^2 IS zero, i.e.

|c_p> in C_p(G) <==> @^2|c_p> = 0.

For example, consider the 2-paths

|ijk> --> s|ijk> := |ij>, t|ijk> := |jk>
|jki> --> s|jki> := |jk>, t|jki> := |ki>
|kij> --> s|kij> := |ki>, t|kij> := |ij>

and 1-paths

|ij> --> s|ij> := |i>, t|ij> := |j>
|jk> --> s|jk> := |j>, t|jk> := |k>
|ki> --> s|ki> := |k>, t|ki> := |i>.

None of the individual 2-paths are 2-chains because they do not
satisfy @^2 = 0; however, the linear combination

|triangle> := |ijk> + |jki> + |kij>

IS a 2-chain, i.e. it is geometrically realizable. Check it :)

With this definition, clearly the boundary map is closed on the space
of chains, i.e.

@: C_p(G) -> C_{p-1}(G).

Phew!!
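Taking up the "Check it :)" invitation, here's a quick Python sketch. The
representation of a linear combination of simple p-paths as a dict from node
tuples to coefficients is my own choice, not anything from the thread:

```python
def boundary(chain, p):
    """@_p = t_p + (-1)^p s_p, where t drops the first node, s the last."""
    out = {}
    for path, c in chain.items():
        for face, sign in ((path[1:], 1), (path[:-1], (-1) ** p)):
            out[face] = out.get(face, 0) + sign * c
    return {k: v for k, v in out.items() if v != 0}

# A single 2-path is NOT a 2-chain: @^2 |ijk> = |k> - |i> != 0 ...
single = {('i', 'j', 'k'): 1}
assert boundary(boundary(single, 2), 1) == {('k',): 1, ('i',): -1}

# ... but |triangle> = |ijk> + |jki> + |kij> satisfies @^2 = 0, so it IS
# a 2-chain, i.e. geometrically realizable.
triangle = {('i', 'j', 'k'): 1, ('j', 'k', 'i'): 1, ('k', 'i', 'j'): 1}
assert boundary(boundary(triangle, 2), 1) == {}
```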

Now we have our vector space of paths and a boundary map. The dual
space is the space of copaths.

The product on the dual space can now be defined as

<AB|g_{p+q}> := <A|s^q(g_{p+q})><B|t^p(g_{p+q})>

where A in P^p and B in P^q.

[Note: Urs and I are still arm wrestling over the notation for all
this stuff :)]

The coboundary is defined as usual as

<dA|S> = <A|@S>,

which is supposed to make you think of Stokes theorem :)

=====================================================
Exercise: Prove that

d(AB) = (dA)B + (-1)^|A| A(dB)

for all p-copaths A and q-copaths B even though d^2 != 0.
=====================================================
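Not a proof, but here is a numerical spot-check of the exercise, done on the
"universal" calculus over three nodes where every (p+1)-tuple counts as a
simple p-path. The conventions (s drops the last node, t drops the first)
are my reading of the definitions above:

```python
import itertools
import random

nodes = 'abc'
random.seed(0)

def paths(p):
    """All simple p-paths, as (p+1)-tuples of nodes."""
    return list(itertools.product(nodes, repeat=p + 1))

def rand_copath(p):
    """A random p-copath: one value per simple p-path."""
    return {g: random.uniform(-1, 1) for g in paths(p)}

def d(A, p):
    """Coboundary: <dA|g> = <A|@g>, with @_{p+1} = t + (-1)^(p+1) s."""
    return {g: A[g[1:]] + (-1) ** (p + 1) * A[g[:-1]] for g in paths(p + 1)}

def mul(A, p, B, q):
    """<AB|g> = <A|s^q g><B|t^p g>: A sees the first p+1 nodes, B the last q+1."""
    return {g: A[g[:p + 1]] * B[g[p:]] for g in paths(p + q)}

# Check d(AB) = (dA)B + (-1)^|A| A(dB) in a few low degrees.
for p, q in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    A, B = rand_copath(p), rand_copath(q)
    lhs = d(mul(A, p, B, q), p + q)
    dAB = mul(d(A, p), p + 1, B, q)
    AdB = mul(A, p, d(B, q), q + 1)
    assert all(abs(lhs[g] - dAB[g] - (-1) ** p * AdB[g]) < 1e-12
               for g in paths(p + q + 1))
```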

As you may guess, the space of p-cochains C^p is the subspace of
p-copaths for which d^2 = 0, i.e.

A in C^p <==> d^2A = 0.

Phew^2 !!

If you are interested enough to have read this far, then it might be
worth pointing out that all this is done on an ARBITRARY DIRECTED
N-GRAPH! :)

> Eric Forgy and I have been working on this lately and because it is of some
> interest with respect to this thread I note that a pre-preprint version of
> our notes can be found at http://www-stud.uni-essen.de/~sb0264/p4.pdf .

Note that the "pre"-preprint was not a typographical error. It really
is a rough sketch even before the preprint is done :)

> On pp. 30 of this text the Hodge star operator on discrete spaces is discussed
> in general and its specific realization on (hyper-)cubic graphs is analyzed
> in detail.

Ack!! :)

As I said, all of this works for arbitrary graphs. Pictorially, you
might imagine a bunch of paths "weaving" their way through some
abstract space. However, something especially nice happens if you have
precisely n edges directed both away from each node and toward each
node. Kind of like a conservation of flux lines (which reminds me a
little bit about the U(1) version of loop QG, i.e. loop
electromagnetism :)). THIS is what Urs means by a "hyper cubic graph"!
At each node, it may topological look like a "cubic grid", but it need
not look cubic globally. Very much like a manifold looks locally like
R^n. The particular class of n-graphs for which we worked out the
Hodge star looks locally cubic, at least topologically. The use of the
word "cubic" is also something we arm wrestle over because "cube"
makes me think of a geometrical cube, when what he really means is a
topological cube, which doesn't necessarily have to look geometrically
anything like a cube. Oh well :) The class of directed n-graphs for
which each node has n edges directed both toward it and away from it
serves as a discrete (not "topologically discrete") kind of
manifold.

Hmm... since I brought up "topological discreteness" I might as well
mention that the topology we put on these directed n-graphs is NOT
the discrete topology. It is more like the topology of a poset. In
fact, the directed n-graphs are very much like (and possibly can be
special cases of) Sorkin's finitary topological spaces. Then again,
since we have finitary topologies that are not discrete, this implies
that the topologies we deal with are not Hausdorff!! :) I've mentioned
this a few times over the years and I still find it kind of
interesting. The idea of non-Hausdorff topologies in physics also
comes up in the idea of charges/fermions actually being the
identification of two distinct points in spacetime (making spacetime
non-Hausdorff). I forget who came up with that idea, but Professor
Baez recently brought that up again :) This kind of thing seems
natural on a directed n-graph. Anyway :)

> Volume forms, integration, and everything needed to write down
> the EM action on the lattice with arbitrary (and in particular non-flat)
> background metric as familiar from the continuum is discussed in section 4.

I'll just clarify again that all of these "mimetic" analogues of
continuum objects are defined on quite general directed n-graphs that
can have exotic topologies.

> This approach does not refer to the dual lattice for defining Hodge duals
> but instead proceeds in the spirit of noncommutative geometry by
> representing every object in discrete differential geometry as an operator
> on a suitable Hilbert space. An inner product (in the pseudo-Riemannian
> case) or scalar product (in the Riemannian case) on this Hilbert space then
> induces a notion of metric on the discrete space and the Hodge dual can be
> formulated in terms of operator products and operator adjoints.

Beauty is in the eye of the beholder, but I personally find this to be
very beautiful :)

> Implicitly the discussion has focused on using timelike and spacelike edges.
> Why not use lightlike ones?
>
> We were kind of surprised to find that the non-commutative-geometry-like
> formulation of differential geometry on discrete spaces singles out a metric
> on (hyper-)cubic lattices with respect to which _all_ edges are _lightlike_
> (pp. 46). A (hyper-)cubic complex with such a metric we call an "n-diamond
> complex" and it turns out that such diamonds enjoy all sorts of nice
> properties. See below.

Again, an n-diamond complex is kind of like a discrete version of
Minkowski space. Not to say that it is Lorentz invariant, but rather
that it has a GLOBAL coordinate patch. Everything in the pre-preprint
can be extended to more exotic spaces that look locally like an
n-diamond complex.

> > Next, I think we should demonstrate conservation of charge.
>
> When a fully "mimetic" formulation of discrete geometry is available, all
> results such as this charge conservation automatically carry over from the
> continuum. By "mimetic" one means (e.g.
> http://www.math.unm.edu/~stanly/mimetic.html
> http://math.unm.edu/~stanly/mimetic/contmech.html) a discrete framework in
> which all the familiar algebraic relations such as Stokes' theorem and
> various identities involving the Hodge star hold without lattice
> corrections.

Beauty :)

> > Now, lets make waves!
> [...]
> > HOWEVER, the discretised wave equation tends to have horrible numerical
> > instabilities, so I'm not going to try giving an example.
>
> I am not an expert on the general case of waves on discrete spaces, but I
> think that on diamond complexes, where all edges are lightlike, the
> discretized wave equation actually gives the exact result (along the
> preferred lattice directions), since the waves can propagate happily along
> the lightlike edges.

Right, which means that there WILL be numerical "dispersion error"
along directions that are not necessarily aligned with edges. I just
wanted to clarify that it is only exact for edges that are laid out in
a "straight" line, which won't really happen in general. This
corresponds to the fact that there is a "magic" discretization of the
(1+1)d wave equation that is exact, but there is no such "magic"
formulation for (p+1)d wave equations for p > 1.
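For anyone who wants to see the (1+1)d "magic" discretization in action, here is a small Python sketch (the grid size, profile and step count are mine, purely for illustration). With the Courant number set to 1 (dt = dx = c = 1) the standard leapfrog update translates a travelling wave exactly, up to floating-point roundoff:

```python
import math

N = 64                                  # periodic grid; dx = dt = c = 1
def f(x):                               # right-moving profile u(x,t) = f(x-t)
    return math.sin(2 * math.pi * x / N)

u_prev = [f(j) for j in range(N)]       # u at t = 0
u_curr = [f(j - 1) for j in range(N)]   # u at t = 1 (exact first step)

for n in range(1, 200):                 # leapfrog update at Courant number 1
    u_next = [u_curr[(j + 1) % N] + u_curr[(j - 1) % N] - u_prev[j]
              for j in range(N)]
    u_prev, u_curr = u_curr, u_next

# after 200 steps the numerical solution is just the shifted profile,
# up to roundoff: no dispersion error at all
err = max(abs(u_curr[j] - f(j - 200)) for j in range(N))
print(err)
```

The same scheme at Courant number < 1, or in more than one space dimension, does pick up dispersion error, which is the point being made above.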

> The Laplace-Beltrami operator for n-diamond complexes
> is worked out on p. 45 and it clearly has all such waves in its kernel.
>
> Furthermore hypercubic complexes have the advantage that the (discrete)
> exterior bundle over them does decompose as the product of two (discrete)
> spinor bundles just as in the continuum case (p 47). (This is not true for
> non-hypercubic discrete spaces.) Therefore this formalism also allows to
> write down Dirac-Kaehler actions and equations (pp. 56). (Some of the
> notation regarding spinors and forms is not explained in the above file but
> in appendix A of http://xxx.lanl.gov/abs/hep-th/0311064 .)

Yep. It has been great fun finally working with someone (Urs) who has
some chance to make my half baked ideas precise :) I'm working hard on
the "pre"print sequel to the "pre pre" print :) Who knows if it will
ever make its way to an actual "print"? :)

Best regards,
Eric

Urs Schreiber

Nov 21, 2003, 5:35:55 PM
Tim S <T...@timsilverman.demon.co.uk> wrote

> > I've taken an irrational dislike to the way we've written our action:
> >
> > Sum_p F(p)^2
> >
> > where p are our plaquettes and F is our curvature dA.
> >
> > Aesthetically, I'd like one of those factors of F(p) to be something
> > sort of dual to F rather than F itself. Let's call it *F, because it
> > vaguely reminds us of something by that name that we saw somewhere else
> > a while ago.

We haven't yet had any discussion on what the correct generalization of A to
the lattice should be. It's unnatural for it to be Lie-algebra valued on the
lattice. We should rather have a holonomy-valued 1-form H. Because of the
fact that 1-forms and 0-forms don't commute on the lattice it turns out that
H^2 seems to be the correct generalization of gauge curvature to the
lattice, i.e.

d_A A -> {H,H} = 2 H^2 .

This is a cute little formula, I think, which nicely makes use of
the special non-commutativity on the lattice. I have sketched
some more details on pp. 57 of
http://www-stud.uni-essen.de/~sb0264/p4a.pdf . I'd be interested
to hear any comments on this.

Ralph Hartley

Nov 22, 2003, 4:57:54 AM
Eric A. Forgy wrote:

> A directed n-graph is about the only thing it could be. First of all,
> what is an ordinary directed graph?
>
> A directed graph G (for my purposes) consists of countable sets G_0
> and G_1 together with maps
>
> s_1,t_1:G_1->G_0.
>
> The elements of G_0 are our "nodes" and the elements of G_1 are our
> "directed edges".

OK. This works fine because a line (a 1 edge) naturally has two ends.
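For reference, the quoted definition fits in a few lines of Python (the particular node and edge labels are an example of mine, matching Derek's figure further down the thread):

```python
# node set G_0 and directed-edge set G_1, with source/target maps
# s_1, t_1 : G_1 -> G_0
G0 = {1, 2, 3, 4, 5, 6}
G1 = {(1, 2), (2, 3), (1, 4), (2, 5), (3, 6), (4, 5), (5, 6)}

def s1(e): return e[0]   # source of a directed edge
def t1(e): return e[1]   # target of a directed edge

assert all(s1(e) in G0 and t1(e) in G0 for e in G1)
print(sorted(e for e in G1 if s1(e) == 2))   # edges out of node 2: [(2, 3), (2, 5)]
```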

> Ok. Now a directed n-graph G (for my purposes) consists of n+1 countable
> sets
>
> G_p, p in {0,...,n}
>
> together with maps
>
> s_p,t_p: G_p->G_{p-1}

This is clunky as a definition because the boundary of a plane figure
doesn't naturally have two parts.

This definition makes a 2-edge look like a biangle
____
/ \
/ \
* *
\ /
\____/

when a triangle, pentagon or even a uniangle
____
/ \
/ \
* |
\ /
\____/

should be just as good.

The "boundary" map for a 2-edge should return a cyclic sequence. That is a
tuple (s_1,s_2,s_3,...,s_p) (for some p) where the order counts up to cyclic
rearrangements, so that (s_2,s_3,...,s_p,s_1) counts as the same order.

The constraint is that t s_i = s s_{(i+1) mod p} for all i<=p.

I think it is enough to just have circles because you can't have a
*directed* Mobius strip, since in 2-D direction corresponds to orientation.

It gets a bit more complicated when you get to 3-edges. The bits of
boundary need to be laid out on a sphere.

> With the intentional flavor of n-categories, a simple p-path |g_p> may
> be thought of as a "p-dimensional arrow" from the source s_p|g_p> to
> the target t_p|g_p>. We will eventually interpret the arrows as
> providing a sense of time, as in the causal poset approach of Sorkin.

Lubos might think that this is a bit forced, and I would have to agree (for
a change).

Ralph Hartley

Urs Schreiber

Nov 22, 2003, 1:14:08 PM
"Ralph Hartley" <har...@aic.nrl.navy.mil> schrieb im Newsbeitrag
news:bpldtf$oit$2...@ra.nrl.navy.mil...
> Eric A. Forgy wrote:

> > Ok. Now a directed n-graph G (for my purposes) consists of n+1 countable
> > sets
> >
> > G_p, p in {0,...,n}
> >
> > together with maps
> >
> > s_p,t_p: G_p->G_{p-1}
>
> This is clunky as a definition because the boundary of a plane figure
> doesn't naturally have two parts.

[...]


> The "boundary" map for a 2-edge should return a cyclic sequence. That is a
> tuple (s_1,s_2,s_3,...,s_p) (for some p) where the order counts up to
> cyclic rearrangements, so that (s_2,s_3,...,s_p,s_1) counts as the same
> order.

What you describe here is the boundary map on abstract simplices. Eric is
actually trying to go one step beyond simplices to a place where there is no
(discrete) geometry yet but just abstract "paths". That's why he was talking
about "pre-geometry". The idea is to generalize the boundary map to a setup
which has even less structure than the space of simplices.

The reasoning is roughly as follows: The starting point for all discrete
thinking is a space of vertices P0 = span{(i)} together with a space of
directed edges P1=span{(i,j)} between vertices. You can think of this as
the space of 0-simplices and the space of 1-simplices. But with the
reasonable requirement that each edge of a p-simplex should be in P1
this does in general _not_ give you p-simplices for p>1. For instance
think of the case where P0 and P1 describe a branched polymer with a
few cross-connections.

Hence without requiring special properties from P1 the only sensible way to
get higher objects is to concatenate elements in P1 to get p-paths. For
instance a 2-path is just a path consisting of 2 steps, which we denote by
the ordered tuple (i,j,k), where (i,j) and (j,k) must be in P1. Only if it
happens that all of (j,i), (k,j), (i,k), (k,i) are also elements of P1 can
we form a 2-simplex

[i,j,k] = (i,j,k) - (i,k,j) - (j,i,k) + (j,k,i) + (k,i,j) - (k,j,i)

as a special sort of 2-paths. Hence the space of simplices (if it exists for
a given P1) is a subspace of the space of paths. We would like to have a
generalization of the boundary map @ to the space of paths such that the
space of simplices (if it exists for a given P1) is in the kernel of @^2.

The naive adaption of the boundary map to the space of paths does not work,
because we cannot in general remove intermediate edges from a p-path and get
a (p-1)-path. For instance if (i,j,k) is an element of P2 (i,k) need not be
an element of P1. The only deletions of edges that are guaranteed to produce
an existing path are those of the first and of the last edge.

This is the rationale behind defining on the space P = sum_p Pp of all paths
the boundary map as

@ = t + (-1)^p s,

where t removes the first and s removes the last edge from any path.
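This boundary map is easy to experiment with in a few lines of Python (the encoding of a simple p-path as a tuple of p+1 node labels is my own convention). The sketch below shows that a bare 2-path is not annihilated by @^2, while the antisymmetrized 2-simplex is:

```python
from fractions import Fraction
from itertools import permutations

def boundary(chain):
    """@ = t + (-1)^p s on formal sums of simple p-paths, where a simple
    p-path is a tuple of p+1 node labels; t drops the first edge (first
    node) and s drops the last edge (last node)."""
    out = {}
    for path, c in chain.items():
        p = len(path) - 1                # number of edges in the path
        for piece, sign in ((path[1:], 1), (path[:-1], (-1) ** p)):
            out[piece] = out.get(piece, 0) + sign * c
    return {k: v for k, v in out.items() if v}

# a bare 2-path is NOT annihilated by @^2 ...
print(boundary(boundary({(1, 2, 3): 1})))   # {(3,): 1, (1,): -1}

# ... but the (normalized) antisymmetrized 2-simplex [1,2,3] is
def sign_of(perm):                           # permutation sign via inversions
    inv = sum(1 for a in range(3) for b in range(a + 1, 3)
              if perm[a] > perm[b])
    return (-1) ** inv

simplex = {p: Fraction(sign_of(p), 6) for p in permutations((1, 2, 3))}
print(boundary(boundary(simplex)))           # {}
```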

As Eric has discussed, geometrical objects can now be characterized as being
elements in the kernel of @^2. All other objects are "pre-geometrical". In
particular if P1 is such that it admits simplices, then @ reduces to the
usual boundary map on such simplices.

> I think it is enough to just have circles

What do you mean by this?

> > With the intentional flavor of n-categories, a simple p-path |g_p> may
> > be thought of as a "p-dimensional arrow" from the source s_p|g_p> to
> > the target t_p|g_p>. We will eventually interpret the arrows as
> > providing a sense of time, as in the causal poset approach of Sorkin.
>
> Lubos might think that this is a bit forced, and I would have to agree
> (for a change).

I think that when you see examples you'll agree that it is not forced.
Consider the following: In
http://groups.google.de/groups?selm=c2e84040.0307171507.38af0a5a%40posting.google.com
Derek Wise wrote:

>>>>>>>>>
We've got our tiny little sample spacetime, which looks like this:

e4 e7
v4-->---v5--<---v6
| | |
| | |
e1^ f1 ^e3 f2 ^e6
| | |
| | |
v1--->--v2--->--v3
e2 e5

To get F=dA, what we would really like to say is:

dA:C_2 --> R
f |---> A(t(f)) - A(s(f)).

in direct analogy with how we defined d(phi). The problem here is
that, as we have defined faces so far, they have neither sources nor
targets. Really we should be thinking of faces as 2-morphisms between
1-morphisms (linear combinations of edges). For example, in our model
spacetime, draw f1 as a double arrow "====>" from the lower left of
the face (near v1) to the upper left (near v5). I'd do this, but it's
beyond my ASCII-Art patience to figure out how. What this means is
that f1 is a 2-arrow from (-e2+e1) to (e3-e4). Note the orientation
-- there's a "right-hand rule" involved here, which is equivalent to
my earlier choice of orienting all the faces ccw. With this way of
looking at faces, the s & t in the above now make sense.
<<<<<<<<<

In the above path language we have

f1 = e2 concatenated with e3 - e1 concatenated with e4

and hence

t f1 = e3 - e4
s f1 = e2 - e1.

So, indeed, the plaquette f1 can be regarded as a path/morphism from s f1 to
t f1 .


Gerard Westendorp

Nov 24, 2003, 7:02:21 AM
Tim S wrote:

[..]

> OK, I want to learn more about this stuff, so I'm going to
> try to revive this thread, which has gotta be more productive
> than the Superstring Wars being fought over in some other
> threads.

Yes, there should surely be some cool stuff waiting out there
for us on this subject.

[..path integral normalization..]

> But I'm pretty sure that to pick up the correct measure from
> R^m, we need our basis vectors to be not only mutually

> orthogonal but also normalized,


> otherwise we end up with a gratuitous scale factor.

Or perhaps the path integrals so far are only weight factors
like exp(-E/kT). In other words, they reflect the relative
amplitude of a state. To get the actual amplitude, you have to
divide by some big sum over everything. At least I think that is
what the wizards were planning.

[..]

>
> There are two field equations:
>
> (1) dF = 0
>
> This is equivalent to div B = 0 and curl E + dB/dt = 0 and
> isn't very exciting, particularly in this case where F is a
> 2-form on a 2-lattice and hence dF is trivially zero. The
> other equation is more interesting:
>
> (2) d*F = *j
>
> where j is our source current density.
>
> Let's think about what this means.
>


At this point, I would like to point out that it is not actually
necessary to introduce the dual network at all. Not that I am
against it, it is just that a lot of attention seems focused on
things like the Hodge star, while in a discrete world,
everything can be completely defined by the primal network.

In a 1-complex, all you need, once you have laid out the
topology in your graph, is the equivalent of Ohm's law.
If you assign a scalar to each vertex, let's say a voltage (V),
you then apply the coboundary operator to get
the set of [dV] on each edge. The [dV] are 1-chunks, the field
strength (E), multiplied by a length (dL).

Next, we convert the [dV] into currents (I). This is done by Ohm's
law, which is in effect a Hodge dual and metric tensor all in
one linear relation. The I = j dS are (D-1)-chunks: they are
(D-1)-dimensional current densities multiplied by a
(D-1)-dimensional subspace element.

The coboundary of a coboundary is zero, so the coboundary
of dV is zero. This is Kirchhoff's voltage law.

If we took the dual lattice, Kirchhoff's current law would be
that the coboundary of the I on the dual lattice is zero. But as
I claimed, we don't need the dual lattice. Rather than saying
the coboundary of I on the dual lattice
is zero, we can also say that the *boundary* of I is zero,
but referred to the primal lattice.

So in dual speak:

d^2V = 0 (Kirchhoff's voltage law)
I = *dV (Ohm's law)
*dI = 0 (Kirchhoff's current law)

This becomes in primal speak:

Coboundary^2 (V) = 0 (Kirchhoff's voltage law)
I_ij = C_ij (V_i-V_j) (Ohm's law)
Boundary (I) = 0 (Kirchhoff's current law)
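As a sanity check of the "primal speak" recipe, here is the smallest possible example in Python: a two-resistor voltage divider. The conductance values and labels are mine, chosen for illustration only:

```python
# nodes 0 -- 1 -- 2 in series; voltages fixed at the ends
C = {(0, 1): 1.0, (1, 2): 2.0}          # Ohm's law couplings C_ij
V = {0: 1.0, 2: 0.0}                    # boundary voltages

# Kirchhoff's current law at the interior node 1 (Boundary(I) = 0):
#   C01*(V0 - V1) = C12*(V1 - V2)
V[1] = (C[(0, 1)] * V[0] + C[(1, 2)] * V[2]) / (C[(0, 1)] + C[(1, 2)])

# Ohm's law on each edge: I_ij = C_ij (V_i - V_j)
I = {e: C[e] * (V[e[0]] - V[e[1]]) for e in C}

print(V[1])                    # 1/3
print(I[(0, 1)], I[(1, 2)])    # both ~ 2/3: current in = current out at node 1
```

No dual lattice appears anywhere: the coupling constants C_ij play the role of the Hodge star, exactly as the conjecture below suggests.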

Once again I want to stress that it is instructive to say
things like "I is a 2-form", but this reflects only the
*relationship* between the discrete and continuous case, while
the discrete case is complete without it.

In a 2-complex, we have a value(A) on each edge. The F is defined as

F = dA -> dF = 0

So F is the coboundary of A. It is a kind of torque, a 2-chunk.
To convert it to a "mesh current", we again need a kind of Ohm's law:

J_mesh = C_mesh F

Then, the boundary of J_mesh is

boundary (J_mesh) = j

This is equivalent to the expression:
*d*F = j, or d*F = *j

The action is simply

S = sum_i C_mesh_i F_i^2

Conjecture:

"The discrete Hodge star is equivalent to Ohm's law"

[..]

> In three dimensions, F is still a 2-form, so has 3 x 2/2 = 3
> independent components (3 degrees of freedom).
>
> *F is a 1-form.
> d*F is a 2-form, hence has 3 x 2/2 = 3 independent components.
>
> So d*F = *j imposes three constraints on our 3 degrees of freedom,
> again completely fixing the field.

hmm..
Remember the stuff on "Ghost of Ghosts"?
Well, here the D components of the (D-1) form *j are not independent,
but satisfy d*j = 0. So the number of dynamic components is one bigger.

This is important for the (2+1) D case, where you predict that no
waves exist. But because of the extra bonus, you can actually have
waves in (2+1)D.

Gerard

Eric A. Forgy

Nov 25, 2003, 2:36:43 AM

"Urs Schreiber" <Urs.Sc...@uni-essen.de> wrote:
> "Ralph Hartley" <har...@aic.nrl.navy.mil> schrieb:

> > Eric A. Forgy wrote:
>
> Hence without requiring special properties from P1 the only sensible way to
> get higher objects is to concatenate elements in P1 to get p-paths. For
> instance a 2-path is just a path consisting of 2 steps, which we denote by
> the ordered tuple (i,j,k), where (i,j) and (j,k) must be in P1. Only if it
> happens that all of (j,i), (k,j), (i,k), (k,i) are also elements of P1 can
> we form a 2-simplex
>
> [i,j,k] = (i,j,k) - (i,k,j) - (j,i,k) + (j,k,i) + (k,i,j) - (k,j,i)
>
> as a special sort of 2-paths. Hence the space of simplices (if it exists for
> a given P1) is a subspace of the space of paths. We would like to have a
> generalization of the boundary map @ to the space of paths such that the
> space of simplices (if it exists for a given P1) is in the kernel of @^2.

If you were interested in simplices, I would actually write a
2-simplex as

[i,j,k] = 1/3! [ (i,j,k) + (j,k,i) + (k,i,j) - (i,k,j) - (j,i,k) - (k,j,i) ]

Then, if you so desired you could check that our definition of the
boundary map leads to the familiar alternating sum

@[i,j,k] = [j,k] - [i,k] + [i,j],

where

[i,j] = 1/2! [ (i,j) - (j,i) ].

The boundary of a simplex is a sum of simplices. That is nice, but we
have found that simplices are unnatural for building up a discrete
space within our framework. Instead, the natural objects that emerged
are what we call "n-diamonds".

A 2-diamond, unlike a 2-simplex, requires 4 nodes for its definition
and can be written in terms of simple 2-paths as

<il>_2 = (i,j,l) - (i,k,l) (note the first and last nodes coincide)

which looks like

i+---->----+j
| |
| \\ |
v \\| v
| --| |
| |
k+---->----+l

where that thing in the middle is supposed to be a "two dimensional
arrow". A 1-diamond is the same thing as a directed edge

<ij>_1 = (i,j).
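To check this concretely, here is a small Python sketch (the node labels 0,1,2,3 standing in for i,j,k,l are mine) verifying that the 2-diamond is annihilated by @^2 and that its boundary is a sum of four 1-diamonds, i.e. directed edges:

```python
def boundary(chain):   # @ = t + (-1)^p s, as in the earlier posts;
    out = {}           # paths are encoded as tuples of node labels
    for path, c in chain.items():
        p = len(path) - 1
        for piece, sign in ((path[1:], 1), (path[:-1], (-1) ** p)):
            out[piece] = out.get(piece, 0) + sign * c
    return {k: v for k, v in out.items() if v}

# the 2-diamond <il>_2 = (i,j,l) - (i,k,l), with i,j,k,l = 0,1,2,3
diamond = {(0, 1, 3): 1, (0, 2, 3): -1}

print(boundary(diamond))             # four 1-diamonds: (1,3)+(0,1)-(2,3)-(0,2)
print(boundary(boundary(diamond)))   # {}: the diamond is "geometric"
```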

I don't know how I did it, but here is a 3-diamond <i0i7>_3 :)


i4 +------>------+ i7
/| /|
^ | ^^^ ^ |
/ ^ /// / ^
i2 +------>------+ i5|
| | /// | |
|i3 +------>--|---+ i6
^ / ^ /
| ^ | ^
|/ |/
i0 +------>------+ i1

This shape is why Urs likes to call it "cubic" :) But these guys are
not necessarily rigid, i.e. they are deformable, but they have the
same connectivity as a cube.

Again that thing in the middle is supposed to be a "three-dimensional
arrow".

Just like simplices, the boundary of a p-diamond is a sum of
(p-1)-diamonds.

I should point out that a p-diamond is NOT the sum of p-simplices
because there are no "opposite" edges in a p-diamond, where the
definition of a p-simplex from paths absolutely requires opposite
edges. The arrows pick out a preferred direction.



> The naive adaption of the boundary map to the space of paths does not work,
> because we cannot in general remove intermediate edges from a p-path and get
> a (p-1)-path. For instance if (i,j,k) is an element of P2 (i,k) need not be
> an element of P1. The only deletions of edges that are guaranteed to produce
> an existing path are those of the first and of the last edge.
>
> This is the rationale behind defining on the space P = sum_p Pp of all paths
> the boundary map as
>
> @ = t + (-1)^p s,
>
> where t removes the first and s removes the last edge from any path.
>
> As Eric has discussed, geometrical objects can now be characterized as being
> elements in the kernel of @^2. All other objects are "pre-geometrical". In
> particular if P1 is such that it admits simplices, then @ reduces to the
> usual boundary map on such simplices.

It was fun to cook up classes of elementary paths for which @^2 = 0. A
simplex is one, a diamond is another, but these do not exhaust the
list of possible elementary "geometric" paths. We've concentrated our
efforts on n-diamond complexes because that is the case where all
differential geometric operations have natural counterparts in the
discrete world. I still don't know if these other elementary paths
will play a role.

Gotta run!
Eric

Ralph Hartley

Nov 25, 2003, 8:00:45 AM
Urs Schreiber wrote:
> "Ralph Hartley" <har...@aic.nrl.navy.mil>:

>>This is clunky as a definition because the bounds of a plane figure
>> doesn't naturally have two parts.
...

> I think that when you see examples you'll agree that it is not forced.
> Consider the following:

> Derek Wise wrote:
>
>: What this means is
>: that f1 is a 2-arrow from (-e2+e1) to (e3-e4).

But how is that related to paths (e1+e4)->(e2+e3) , (-e4-e1+e2)->(e3) or
even (e1+e4-e3-e2)->()? (Assuming I got the signs right, they all
correspond to the same simplex.)

>>I think it is enough to just have circles
>
> What do you mean by this?

Damned if I know :-)

Ralph Hartley

Urs Schreiber

Nov 25, 2003, 12:30:15 PM
I wrote:

> We haven't yet had any discussion on what the correct generalization of
> A to the lattice should be. It's unnatural for it to be Lie-algebra
> valued on the lattice. We should rather have a holonomy-valued 1-form H.
> Because of the fact that 1-forms and 0-forms don't commute on the lattice
> it turns out that H^2 seems to be the correct generalization of gauge
> curvature to the lattice, i.e.
>
> d_A A -> {H,H} = 2 H^2 .
>
> This is a cute little formula, I think, which nicely makes use of
> the special non-commutativity on the lattice. I have sketched
> some more details on pp. 57 of
> http://www-stud.uni-essen.de/~sb0264/p4a.pdf . I'd be interested
> to hear any comments on this.

Yup, it seems to turn out that this is the correct idea. We can write the
lattice YM action in the simple form

S = <H^2 | H^2>,

where H is the discrete holonomy 1-form and <|> is the Hodge scalar product
on the lattice. One can show that this is equivalent to the lattice Wilson
loop action (for instance equation (1.5) in
http://edoc.hu-berlin.de/dissertationen/necco-silvia-2003-05-15/PDF/Necco.pdf).
The (short) proof is on page 59 of the above mentioned notes.

The above concise form of S is maybe useful for something. At least the
equations of motion come out rather nicely:

The Bianchi identity becomes the trivial

[H, H^2] = 0

and the equation of motion is

[H,.]^dag H^2|1> = 0 .

I note that already for U(1) gauge theory these equations are slightly
_different_ from the lattice versions of

d F = 0, del F = 0.

That's because F is taken to be Lie-algebra valued (as has been assumed
before in this thread), while on the lattice we should really only have
group-valued objects like H, I think.


I have been toying around with the idea of turning the simple <H^2 | H^2 >
into super-Yang-Mills theory on the lattice (but I haven't yet read any
literature on lattice SYM). There seems to be a nice way to write down a
gauge covariant lattice Dirac operator on gauginos (i.e. algebra-valued
fermions):

The ordinary Dirac-Kaehler action

S_f = <psi |(d+del) psi >

can be rewritten as

S_f = | <psi | d psi>|^2 .

We know how to turn d psi into a gauge covariant exterior derivative on the
lattice:

d psi --> [H,psi].

Hence one might try to use

S_f = | <psi | [H, psi] >|^2

as the fermionic part of the lattice YM action. It does not have any fermion
doubling, but I am not sure if it respects chiral symmetry in some sense.
(However the free <psi |(d+del) psi > does indeed have chiral symmetry on
the lattice, cf. pp. 62.)

Urs Schreiber

Nov 25, 2003, 12:36:07 PM
"Ralph Hartley" <har...@aic.nrl.navy.mil> schrieb im Newsbeitrag
news:bptnea$1b1$1...@ra.nrl.navy.mil...

> > Derek Wise wrote:
> >
> >: What this means is
> >: that f1 is a 2-arrow from (-e2+e1) to (e3-e4).
>
> But how is that related to paths (e1+e4)->(e2+e3)

That's a plaquette with edges oriented differently than in the above
example.

Think about the same question in 1 dimension less, where the question would
read: How is the edge a --> b related to b --> a?

> (-e4-e1+e2)->(e3) or

I see your point. This is probably a formalism-dependent question. In the
formalism that Eric has described this would correspond to the source-->
target relations of the following 2-step path:

P = - e4 concat e3 - e1 concat e3 + e2 concat e3 .

But this P is not in the kernel of @^2.

> even (e1+e4-e3-e2)->()?

At least in the formalism that Eric described this cannot happen because any
path with a non-vanishing source has a non-vanishing target.

I think your question is: Is there anything special about Eric's way of
assigning targets and sources? I am not sure. It is "natural" and it works.
Probably Derek Wise or John Baez know more about this.

Eric A. Forgy

Nov 26, 2003, 3:28:08 AM
This is actually an early edition response to Ralph's question even
though his post hasn't appeared yet as of the time of this writing :)

But he WILL ask this question...

>Urs Schreiber wrote:
>>> "Ralph Hartley" <har...@aic.nrl.navy.mil>:

>>>>> This is clunky as a definition because the boundary of a plane
>>>>> figure doesn't naturally have two parts.

>>> I think that when you see examples you'll agree that it is not
>>> forced.
>>> Consider the following:

>>> Derek Wise wrote:
>>>
>>>: What this means is
>>>: that f1 is a 2-arrow from (-e2+e1) to (e3-e4).

>But how is that related to paths (e1+e4)->(e2+e3) , (-e4-e1+e2)->(e3)
>or even (e1+e4-e3-e2)->()? (Assuming I got the signs right, they all
>correspond to the same simplex.)

Since I think this stuff is so fascinating, I want to avoid any
possible confusion. So let me go a little deeper in relating Derek's
original post to what Urs and I have been up to lately.

Ok, to try to make this a little more explicit, we can label all edges
and faces in terms of the node labels. To do this, I would redraw the
figure as


(4,5) (5,6)
(4)---->----(5)---->----(6)
| | |
| | |
| f1 | f2 |
| | |
(1,4)^ ^(2,5) ^(3,6)
| | |
| | |
| | |
| | |
(1)---->----(2)---->----(3)
(1,2) (2,3)

Note that I flipped the orientation of (6,5) to (5,6) from the
original figure, where (i,j) denotes a directed edge from node (i) to
(j).

Now, the 2-paths are obtained from the directed edges by
"concatenating" them, when concatenation makes sense. For example,

(1,2) "concatenated with" (2,5) = (1,2,5)

(1,2) "concatenated with" (4,5) = 0

because the end of (1,2) does not correspond to the beginning of
(4,5).

(1,2) "concatenated with" (1,4) = 0

for the same reason, because the directed edges (1,2) and (2,1) are
considered distinct; in fact, (2,1) does not even appear in our graph.
If we wanted to, we could add opposite directed edges between each
pair of connected nodes, but there are reasons for not doing this that
we can explain if you are interested :)
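In Python this concatenation rule is a one-liner (the tuple encoding of paths is my own convention):

```python
# the directed-edge set P1 from the figure above
P1 = {(1, 2), (2, 3), (1, 4), (2, 5), (3, 6), (4, 5), (5, 6)}

def concat(a, b):
    """join two simple paths; zero when the end of a isn't the start of b"""
    return a + b[1:] if a[-1] == b[0] else 0

print(concat((1, 2), (2, 5)))   # (1, 2, 5)
print(concat((1, 2), (4, 5)))   # 0: end of (1,2) isn't the start of (4,5)
print(concat((1, 2), (1, 4)))   # 0: and (2,1) is not even in P1
```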

Anyway, now we have several ways to interpret the 2-path (1,2,3). This
may be thought of as a true "path" in a directed graph passing through
the nodes

(1) -> (2) -> (3)

or we may think of (1,2,3) as a "two dimensional arrow" from

(1,2) ==> (2,3).

Since we will eventually build up two dimensional objects from these
"pre-geometric" 2-paths, the latter interpretation is the one we go
with so that

(1,2,3): (1,2) ==> (2,3).

Now, if we were to write up an actual paper describing this stuff
using indices, it would be EXTREMELY NASTY to look at, but for
explanatory purposes, I think the index-ridden notation is easier.

A fancier way to say the same thing is to just denote a 2-path by

|A>

and then |A> is interpreted as a two dimensional arrow from

s|A>

to

t|A>.

In terms of indices, we would write this as

s(1,2,3) = (1,2)

t(1,2,3) = (2,3).

It is really completely trivial once you get past the notation
barrier. Believe me, we really tried hard to come up with the
best/clearest notation to describe something that is really best seen
with pictures.

Now, any p-path is a formal linear combination of simple p-paths
denoted

(0,...,p).

The source of a simple path is obtained by deleting the last node

s(0,...,p) = (0,...,p-1).

The target of a simple path is obtained by deleting the first node

t(0,...,p) = (1,...,p).

Again, as Urs pointed out, these are the ONLY (p-1)-paths that are
guaranteed to be elements of our n-graph obtainable by deleting a node
from a p-path (by construction).

Motivated by this, we give a modified definition of the boundary map

@ := t + (-1)^p s.

Since both t and s reduce the degree of a path by 1, so does the
boundary map. Furthermore, simple algebra shows that

@^2 = t^2 - s^2.

In general, this is NOT going to be zero. For example, consider our
figure and look at the simple 2-path (1,2,3)

@^2(1,2,3) = (3) - (1) != 0.

In fact, according to our figure, there is NO 2-path containing
(1,2,3) that would give a "boundary of a boundary" being zero.
Therefore, Urs and I would conclude that (1,2,3) is not a "geometric"
2-path.

Are there ANY combinations of 2-paths that give @^2 = 0???

First of all, let's list all the simple 2-paths in our figure:

(1,2,3), (1,2,5), (1,4,5), (2,3,6), (2,5,6), (4,5,6).

Let me repaste the figure to save you from having to scroll up :)


(4,5) (5,6)
(4)---->----(5)---->----(6)
| | |
| | |
| f1 | f2 |
| | |
(1,4)^ ^(2,5) ^(3,6)
| | |
| | |
| | |
| | |
(1)---->----(2)---->----(3)
(1,2) (2,3)

According to this figure, the ONLY 2-paths that satisfy @^2 = 0 would
be

f1 = -(1,2,5) + (1,4,5)

f2 = -(2,3,6) + (2,5,6)

where the overall sign specifies a choice of orientation and I chose
mine to correspond to Derek's original choice. Therefore, we will
claim that any "geometric" 2-path must be obtained by formal linear
combinations of f1 and f2. In a manner that can be made precise if you
like, we say that f1 and f2 are elementary 2-paths. We call the space
of "geometric" 2-paths the space of "2-chains", in a nod to elementary
algebraic topology :).

The source of f1 is given by

s(f1) = -(1,2) + (1,4)

and the target of f1 is

t(f1) = -(2,5) + (4,5).

Therefore, we think of f1 as a two dimensional arrow from s(f1) to
t(f1).

The boundary of f1 is then given by

@(f1) = -(2,5) + (4,5) - (1,2) + (1,4).

The boundary of this is

@^2(f1)
= -[(5) - (2)] + [(5) - (4)] - [(2) - (1)] + [(4) - (1)]
= 0

as it should.

Since the space of p-chains is defined to be the subspace of p-paths
for which @^2 = 0, it follows that the space of chains is closed under
the boundary map so that

@: C_p -> C_{p-1}.
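For anyone following along at home, the whole calculation above fits in a few lines of Python (the tuple encoding of paths is my own convention):

```python
def boundary(chain):
    """@ = t + (-1)^p s on formal sums of simple p-paths, where a simple
    p-path is a tuple of p+1 node labels; t drops the first edge and
    s drops the last edge."""
    out = {}
    for path, c in chain.items():
        p = len(path) - 1
        for piece, sign in ((path[1:], 1), (path[:-1], (-1) ** p)):
            out[piece] = out.get(piece, 0) + sign * c
    return {k: v for k, v in out.items() if v}

f1 = {(1, 2, 5): -1, (1, 4, 5): 1}      # -(1,2,5) + (1,4,5)
f2 = {(2, 3, 6): -1, (2, 5, 6): 1}      # -(2,3,6) + (2,5,6)

print(boundary(f1))                     # -(2,5) - (1,2) + (4,5) + (1,4)
for f in (f1, f2):
    assert boundary(boundary(f)) == {}  # both plaquettes are 2-chains
```

The same function confirms that @^2(1,2,3) = (3) - (1), so the bare 2-path (1,2,3) is indeed not "geometric".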

To finally answer Ralph's question that he will ask :)

>But how is that related to paths (e1+e4)->(e2+e3) , (-e4-e1+e2)->(e3)
>or
>even (e1+e4-e3-e2)->()? (Assuming I got the signs right, they all
>correspond to the same simplex.)

I hope that if you managed to read down this far that the answer to
your question becomes obvious.

A simple 2-path (i,j,k) is a map from (i,j) to (j,k).

If I translate your question into the more index ridden language, it
becomes

"But how is that related to paths

i.) [(1,4) + (4,5)] => [(1,2) + (2,5)],

ii.) [-(4,5)-(1,4)+(1,2)] => (2,5),

or

iii.) [(1,4) + (4,5) - (2,5) - (1,2)] => [0]?"

If I replace all of these with the generic question "Why not S => T?",
I would say that for each of these three situations you wrote, there
is no 2-path P for which s(P) = S and t(P) = T, which would be
required to write something like S => T.

I hope I helped clarify things more than confuse them. Please let me
know if I succeeded or failed :)

Cheers!
Eric

Urs Schreiber

Nov 26, 2003, 6:30:12 AM
"Eric A. Forgy" <fo...@uiuc.edu> schrieb im Newsbeitrag
news:3fa8470f.03112...@posting.google.com...

> If I replace all of these with the generic question "Why not S => T?",
> I would say that for each of these three situations you wrote, there
> is no 2-path P for which s(P) = S and t(P) = T, which would be
> required to write something lke S => T.

In addition, Ralph Hartley might find it interesting that there is a nice
physical interpretation of the special S=>T paths that are realized (as
opposed to those that are not realized):

As was mentioned before, a particularly nice and natural way to choose a
(flat) metric on the lattice is so as to turn all edges into future-pointing
lightlike edges, i.e. to consider n-diamond complexes. As Eric has
explained, the n-dimensional hypercubes of this complex can be considered as
n-paths. And the direction in which these n-paths point is the arrow of
time.

As an illustration consider the beautiful ASCII-on-canvas depiction of a
3-cube that Eric has drawn before:

>>>


i4 +------>------+ i7
/| /|
^ | ^^^ ^ |
/ ^ /// / ^
i2 +------>------+ i5|
| | /// | |
|i3 +------>--|---+ i6
^ / ^ /
| ^ | ^
|/ |/
i0 +------>------+ i1


<<<

If all 1-edges in this picture are taken as lightlike and future-pointing
then time flows from i0 to i7. The cube is precisely the intersection of the
future of i0 with the past of i7. The fat diagonal arrow in the middle maps
all faces touching i0 to all faces touching i7, i.e. it maps (part of) the
future light cone of i0 to (part of) the past light cone of i7.

In this sense abstract n-paths are related to time evolution.

There is a nice dual description of this phenomenon, too. A special discrete
1-form on the lattice is G, which is the sum of all elementary 1-forms
(which are dual to the edges). For graphs without opposite and intermediate
(triangle-like) edges (like the cubic ones that we have been considering
here) G has the special property that for any p-form w we have

dw = [G,w],

where [.,.] is the supercommutator. In this sense G encodes the entire
differential geometry of the discrete space.

For this reason a natural question is: Is there a metric operator on the
discrete space's Hilbert space which is constructed from G alone? The answer
is that there is indeed a unique such operator, namely g ~ G^dag G - G G^dag
and that this g is the metric under which all edges are lightlike and future
pointing.

But this implies that

G ~ dt,

where t is the standard time coordinate on the discrete space. We have thus
found that on n-diamond complexes exterior differentiation is equivalent to
supercommutation with the time differential:

dw ~ [dt,w].

(In case anyone wonders I note again that for discrete spaces 1-forms and
0-forms don't commute and that therefore the above supercommutator is not as
trivial as it would be in the continuum.)

In the context of lattice YM theory it seems noteworthy that G also has a
further interpretation: It is the holonomy 1-form H (i.e. the form which
assigns the gauge holonomy to each edge) for vanishing gauge connection A.
For non-vanishing A the relation

dw = [G,w]

generalizes to

d_A w = [H,w],

where d_A is the lattice version of the gauge-covariant exterior derivative.

Gerard Westendorp

Nov 27, 2003, 5:39:27 AM
Eric A. Forgy wrote:


[..]

> First of all, let's list all the simple 2-paths in our figure:
>
> (1,2,3), (1,2,5), (1,4,5), (2,3,6), (2,5,6).
>
> Let me repaste the figure to save you from having to scroll up :)
>
>
>      (4,5)        (5,6)
> (4)---->----(5)---->----(6)
>  |           |           |
>  |           |           |
>  |    f1     |    f2     |
>  |           |           |
> (1,4)^      ^(2,5)      ^(3,6)
>  |           |           |
>  |           |           |
>  |           |           |
>  |           |           |
> (1)---->----(2)---->----(3)
>      (1,2)        (2,3)
>


This is kind of intriguing...

What if we had selected the arrows:

     (5,4)        (5,6)
(4)----<----(5)---->----(6)
 |           |           |
 |           |           |
 |    f1     |    f2     |
 |           |           |
(1,4)^      V(5,2)      ^(3,6)
 |           |           |
 |           |           |
 |           |           |
 |           |           |
(1)---->----(2)----<----(3)
     (1,2)        (3,2)

In that case, there would be no 2 paths at all!

You could also select the arrows so that you get a lot more
than 5 possible 2 paths. You might ask the question:
"Which selection of arrows gives the maximum number of
2 paths?"

or:
Is there a preferred way of choosing the arrows?

Gerard

Urs Schreiber

Nov 27, 2003, 6:25:20 AM
"Gerard Westendorp" <wes...@xs4all.nl> schrieb im Newsbeitrag
news:3FC535DF...@xs4all.nl...

> Eric A. Forgy wrote:
[..]
> > First of all, let's list all the simple 2-paths in our figure:
> >
> > (1,2,3), (1,2,5), (1,4,5), (2,3,6), (2,5,6).
[...]

> This is kind of intriguing...
>
> What if we had selected the arrows:
>
>      (5,4)        (5,6)
> (4)----<----(5)---->----(6)
>  |           |           |
>  |           |           |
>  |    f1     |    f2     |
>  |           |           |
> (1,4)^      V(5,2)      ^(3,6)
>  |           |           |
>  |           |           |
>  |           |           |
>  |           |           |
> (1)---->----(2)----<----(3)
>      (1,2)        (3,2)
>
> In that case, there would be no 2 paths at all!

Therefore this choice of arrows turns the 2-dimensional simply connected
chain complex into a 1-dimensional non-simply connected chain complex shaped
like an "8".

Dimension on discrete complexes is naturally defined as the maximum p for
which the complex contains p-paths that are in the kernel of @^2. Or dually,
the maximum p for which we can find discrete p-forms on the complex.

Even in the presence of 2-step paths there need not be any 2-forms. This is
illustrated in figure 2 of http://www-stud.uni-essen.de/~sb0264/p4a.pdf (see
also text on bottom of page 7).

> You could also select the arrows so that you get a lot more
> than 5 possible 2 paths.

Yes. Such scenarios are one reason for Eric's notion of pre-geometry and
path-algebra. Most of the choices of arrows won't yield any (p>1)-path whose
boundary has no boundary. The resulting discrete space is at best (locally)
1-dimensional, like the branched polymer example which I had mentioned
before.

> Is there a preferred way of choosing the arrows?

Yes. There are special choices of 1-paths that yield particularly nice
discrete differential calculi. This is studied in detail in section 2.3 of
the above mentioned notes. There it is found that graphs without
"intermediate" and without "opposite" paths are special.

(Here an "intermediate" path is a path (i,j) such that there is a k so that
the paths (i,k) and (k,j) both exist. An "opposite" path is a path (i,j)
such that also (j,i) is a path.)

For instance, the differential calculi on such graphs (or rather on the
complexes induced by them) admit a volume form and Hodge duality (section
4.3), have an unproblematic continuum limit (section 3.5) and in them the
very useful relation dw = [G,w] is satisfied (section 2.3).

It is interesting that the popular simplicial complexes have both
intermediate and opposite edges and hence do not enjoy these properties.
This is discussed on pp. 14, where we also mention a possible (but only
partial) workaround, called "glued simplices", which were first considered
by Eric Forgy and W. Chew.


Urs Schreiber

Nov 27, 2003, 9:37:29 AM
"Urs Schreiber" <Urs.Sc...@uni-essen.de> schrieb im Newsbeitrag
news:3fc391a7$1...@news.sentex.net...

> before in this thread), while on the lattice we should really only have
> group-valued objects like H, I think.

The natural question then is: Following this philosophy, how do we put, say,
scalar fields in the adjoint rep on the lattice. These fields should be
0-forms and hence live on the vertices. Their lattice version is therefore
usually taken to be an adjoint rep valued function on vertices. I would like
to express here the vague idea that they should instead be modeled by
group-valued discrete 0-forms, which look algebra-valued in the continuum
limit. There are two reasons:

1) The first reason is rather formal: As I have mentioned, we can write down
pure lattice YM entirely in terms of abstract differential calculus
(essentially NCG) over the non-commutative algebra generated by elements
(delta_x,g) with delta_x a discrete delta function and g an element of some
group algebra and we use the obvious algebra product. Conceptually it
therefore seems desirable to stay within this framework, which forces all
objects to be g-valued.

2) The second reason is more "physical": Adjoint scalar matter appears for
instance in the Georgi-Glashow model (e.g. hep-th/9908171 (B.5)). But this
should be really thought of as the bosonic sector of dimensionally reduced
SYM, and this again means that what looks like scalar matter in this model
used to be components of a vector field/gauge connection in higher
dimensions, which does become group valued on the lattice.

To check if this could make sense consider the g-valued 0-form

phi = exp(s i Phi^a t_a),

where s is the lattice spacing and t_a are the Lie algebra generators. A
typical potential term might be

| tr phi^2 |^2

Expanding this in s yields

~ (Phi^a Phi_a - N/s^2)^2 + higher order terms in s .

To lowest order this is the Georgi-Glashow potential with parameter v =
N/s^2.

Hm, seems that this would mean that the Higgs scalar (in this model) gets a
mass proportional to 1 over lattice spacing s...

Please note that I am just toying with these ideas. Possibly it's nonsense.

Gerard Westendorp

Nov 30, 2003, 5:41:53 AM
Urs Schreiber wrote:


[..]

> Dimension on discrete complexes is naturally defined as the maximum p for
> which the complex contains p-paths that are in the kernel of @^2. Or dually,
> the maximum p for which we can find discrete p-forms on the complex.

But isn't it possible to define 4-paths with
@^2=0 on a 2d lattice? For example, if you take a square grid, with
vertex coordinates (i,j), you could make 4-paths A and B:

A = (0,0) -> (0,1) -> (0,2) -> (1,2) -> (2,2)
B = (0,0) -> (1,0) -> (2,0) -> (2,1) -> (2,2)

And then compose A-B, which would have @^2=0.

Gerard

Urs Schreiber

Nov 30, 2003, 12:24:15 PM
"Gerard Westendorp" <wes...@xs4all.nl> schrieb im Newsbeitrag
news:3FC92248...@xs4all.nl...
> Urs Schreiber wrote:

> > Dimension on discrete complexes is naturally defined as the maximum p
> > for which the complex contains p-paths that are in the kernel of @^2. Or

> > dually,the maximum p for which we can find discrete p-forms on the

> > complex.
>
> But isn't it possible to define 4-paths with
> @^2=0 on a 2d lattice?

I don't think so. The proof is easy in the dual picture: It is a direct
consequence of equation (38) in our notes. This equation says in words
that on graphs without intermediate edges (no triangles) the sum of all
products of elementary 1-forms that connect the same two points has to
vanish.

This means that

_ _ = 0

|
| = 0

and that
_
_| = - | .

From this it follows that there is no dual 3-step "path".

Note that this is a local version of the usual continuum relations

dx dx = 0
dy dy = 0

dx dy = - dy dx,

where the product is the wedge product.

> For example, if you take a square grid, with
> vertex coordinates (i,j), you could make 4-paths A and B:
>
> A = (0,0) -> (0,1) -> (0,2) -> (1,2) -> (2,2)
> B = (0,0) -> (1,0) -> (2,0) -> (2,1) -> (2,2)
>
> And then compose A-B, which would have @^2=0.

I am not sure why you think so. I get

@^2 A = [(0,2) -> (1,2) -> (2,2)] - [(0,0) -> (0,1) -> (0,2)]


@^2 B = [(2,0) -> (2,1) -> (2,2)] - [(0,0) -> (1,0) -> (2,0)]

and hence

@^2 (A-B) is nonzero.

How do you compute @^2 ?


Eric A. Forgy

Dec 1, 2003, 1:44:09 AM
Hi Gerard,

Gerard Westendorp <wes...@xs4all.nl> wrote:

What you described would look like this.

o-->--o-->--o
|           |
^           ^
|           |
o           o
|           |
^           ^
|           |
o-->--o-->--o

You are correct that IF this guy existed, then it WOULD have
@^2 = 0, but the catch (and maybe we didn't make this clear enough)
is that it cannot exist as a geometrical object, i.e. a 4-chain, because
it is composed of "non-geometrical" objects. The individual edges

o-->--o-->--o

and

o
|
^
|
o
|
^
|
o


are NOT admissible 2-chains because they do not have @^2 = 0.
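This admissibility test is easy to run by hand in code. The sketch below (my own helper names) uses the modified boundary that Eric spells out later in the thread, @^2 = t^2 - s^2, with t dropping a path's first vertex and s its last, both extended linearly. A straight 2-path fails @^2 = 0, while the two-term combination of the two routes around a directed square passes.

```python
# Sketch: straight 2-paths are not 2-chains, plaquette combinations are.

def apply_map(chain, f):
    out = {}
    for p, c in chain.items():
        q = f(p)
        out[q] = out.get(q, 0) + c
    return {k: v for k, v in out.items() if v != 0}

def t(chain): return apply_map(chain, lambda p: p[1:])   # drop source vertex
def s(chain): return apply_map(chain, lambda p: p[:-1])  # drop target vertex

def sub(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v != 0}

def bdy2(chain):                       # @^2 = t^2 - s^2
    return sub(t(t(chain)), s(s(chain)))

straight = {(1, 2, 3): 1}                    # o-->--o-->--o
plaquette = {(1, 2, 4): 1, (1, 3, 4): -1}    # the two routes 1->4 on a square

assert bdy2(straight) == {(3,): 1, (1,): -1}   # nonzero: not a 2-chain
assert bdy2(plaquette) == {}                   # zero: a geometrical 2-chain
```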

That is a good point though, so thanks for bringing it up :)

Eric

Eric A. Forgy

Dec 1, 2003, 12:13:39 PM
fo...@uiuc.edu (Eric A. Forgy) wrote:
> What you described would look like this.
>
> o-->--o-->--o
> |           |
> ^           ^
> |           |
> o           o
> |           |
> ^           ^
> |           |
> o-->--o-->--o
>
> Although you are correct that IF this guy existed, then it WOULD have
> @^2 = 0

[snip]

Ack! Unfortunately, I didn't catch the moderators in time to save me
from this embarrassing mistake. I didn't check your claim that @^2 = 0
for this 4-path. The claim was incorrect: @^2 is NOT 0 for your
4-path. But even if it were, it would still not be a legitimate
4-path, for the reasons I went on to describe:

> but the catch (and maybe we didn't make this clear enough),
> is that it cannot exist as a geometical object, i.e. 4-chain, because
> it is composed of "non geometrical" objects. The individual edges
>
> o-->--o-->--o
>
> and
>
> o
> |
> ^
> |
> o
> |
> ^
> |
> o
>
>
> are NOT admissible 2-chains because they do not have @^2 = 0.
>
> That is a good point though, so thanks for bringing it up :)

I hope that between Urs and me (in spite of my incorrectly agreeing
that @^2 = 0 for your 4-path), we have put your mind to rest a little
bit :)

What we have created is a perfectly valid formalism for doing lattice
field theory in a way that directly mimics the continuum. Urs is
presently having fun running with it to apply it to superstrings on a
lattice. At this point, he has left me in the dust entirely :)

Eric

Eric A. Forgy

Dec 2, 2003, 6:38:17 PM

Hi Gerard,

I posted an earlier response to this where I pointed out that your
example is not valid because your 4-paths consist of 2-paths for which
@^2 != 0 and that is not acceptable. However, there is an even worse
problem with your example that Urs pointed out to me via email and I
missed the first time.

Sorry, actually A-B does not satisfy @^2 = 0.

Remember that @^2 = t^2 - s^2. So we can compute the boundary of each
piece: A,B.

t^2 A = (0,2) -> (1,2) -> (2,2)

s^2 A = (0,0) -> (0,1) -> (0,2)

t^2 B = (2,0) -> (2,1) -> (2,2)

s^2 B = (0,0) -> (1,0) -> (2,0)

I hope you can see that @^2(A-B) = (t^2 - s^2)(A-B) != 0.

Therefore, your example of a 4-path A-B on a 2d lattice does not
satisfy @^2 = 0.
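The computation above is easy to reproduce in code. Here is a small sketch (helper names are mine, not the thread's) that implements t and s as "drop the first vertex" and "drop the last vertex", extended linearly, and confirms each of the four boundary pieces as well as the nonvanishing of @^2 (A-B).

```python
# Sketch: check Gerard's A, B example against @^2 = t^2 - s^2.

def apply_map(chain, f):
    out = {}
    for p, c in chain.items():
        q = f(p)
        out[q] = out.get(q, 0) + c
    return {k: v for k, v in out.items() if v != 0}

def t(chain): return apply_map(chain, lambda p: p[1:])   # drop source vertex
def s(chain): return apply_map(chain, lambda p: p[:-1])  # drop target vertex

def sub(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v != 0}

A = {((0, 0), (0, 1), (0, 2), (1, 2), (2, 2)): 1}
B = {((0, 0), (1, 0), (2, 0), (2, 1), (2, 2)): 1}

assert t(t(A)) == {((0, 2), (1, 2), (2, 2)): 1}
assert s(s(A)) == {((0, 0), (0, 1), (0, 2)): 1}
assert t(t(B)) == {((2, 0), (2, 1), (2, 2)): 1}
assert s(s(B)) == {((0, 0), (1, 0), (2, 0)): 1}

bdy2 = lambda c: sub(t(t(c)), s(s(c)))    # @^2 = t^2 - s^2
assert bdy2(sub(A, B)) != {}              # @^2 (A - B) is nonzero
```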

It is great that you are challenging us :)

based on your example, I can give another example that I imagine you
might dream of after getting this response.

Ack! But I gotta run! Maybe I'll wait and see if you can think of
other examples that might cause us some trouble in the meantime :)

Best regards,
Eric

Ralph Hartley

Dec 4, 2003, 6:35:14 PM

It is getting clearer, but I'm still foggy about a few points.

Eric A. Forgy wrote:

> Although you are correct that IF this guy existed, then it WOULD have
> @^2 = 0, but the catch (and maybe we didn't make this clear enough),
> is that it cannot exist as a geometical object, i.e. 4-chain, because
> it is composed of "non geometrical" objects. The individual edges

Ignoring the fact that (as you pointed out in a separate correction) the
example was broken, it seems that there are still odd cases, even with the
additional requirement that all (n-1)-paths making up a geometrical n-path
be geometrical themselves.

Consider a triangle with cyclic arrows:

1-----A->----2
 \          /
  ^        B
   \      /
    C    v
     \  /
      \/
      3

0-paths: {1, 2, 3}
1-paths: { A = 1->2 , B = 2->3 , C = 3->1 }
Primitive 2-paths:
  { alpha = A->B = 1->2->3 ,
    beta  = B->C = 2->3->1 ,
    gamma = C->A = 3->1->2 }

This is the only triangle that admits any geometrical 2 paths at all, and
it only admits one:

2 path I =
(alpha+beta+gamma) =
(A->B + B->C + C->A) =
(1->2->3 + 2->3->1 + 3->1->2)

Since geometrical 3 paths need to be built from geometrical 2 paths there
is only one possibility:

3 path J =
I->I =
(alpha+beta+gamma)->(alpha+beta+gamma) =
alpha->beta + beta->gamma + gamma->alpha = (the other terms vanish)
(A->B->C + B->C->A + C->A->B) =
(1->2->3->1 + 2->3->1->2 + 3->1->2->3)

@^2J = 3->1 - 1->2 + 1->2 - 2->3 + 2->3 - 3->1 = 0

so J is a geometrical 3 path.
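Ralph's computation can be checked mechanically with the @^2 = t^2 - s^2 boundary from earlier in the thread. This sketch (helper names mine) confirms that both the cyclic 2-chain I and the 3-path J have vanishing @^2 on the cyclic triangle.

```python
# Sketch: verify @^2 I = 0 and @^2 J = 0 for the cyclic triangle.

def apply_map(chain, f):
    out = {}
    for p, c in chain.items():
        q = f(p)
        out[q] = out.get(q, 0) + c
    return {k: v for k, v in out.items() if v != 0}

def t(chain): return apply_map(chain, lambda p: p[1:])   # drop source vertex
def s(chain): return apply_map(chain, lambda p: p[:-1])  # drop target vertex

def bdy2(chain):                           # @^2 = t^2 - s^2
    out = dict(t(t(chain)))
    for k, v in s(s(chain)).items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v != 0}

I = {(1, 2, 3): 1, (2, 3, 1): 1, (3, 1, 2): 1}            # alpha+beta+gamma
J = {(1, 2, 3, 1): 1, (2, 3, 1, 2): 1, (3, 1, 2, 3): 1}   # the 3-path J

assert bdy2(I) == {}    # I is a geometrical 2-path
assert bdy2(J) == {}    # @^2 J = 0, as computed above
```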

It is easy to show that K = J->J is a geometrical 4-path and so on, so in
your scheme of things any (directed) triangle that can support a
2-dimensional structure can support an infinite-dimensional one.

I think you get the same thing for a cyclic square etc.

Assuming I haven't made a mistake, or missed another implied restriction,
it would seem that you must really want to exclude cycles. If so, that
would mean excluding 2 dimensional triangles altogether. It seems odd to
have a discrete notion of geometry that discriminates between different
tessellations of continuous space to the extent of forbidding some of them
altogether.

I think that last comment is the key to my confusion. Shouldn't a discrete
notion of space deal impartially with different grids?

Ralph Hartley


Gerard Westendorp

Dec 8, 2003, 6:16:09 AM
Eric A. Forgy wrote:

[..]


> Therefore, your example of a 4-path A-B on a 2d lattice does not
> satisfy @^2 = 0.

I guess I didn't look closely enough at your definitions!

>
> It is great that you are challenging us :)
>
> based on your example, I can give another example that I imagine you
> might dream of after getting this response.

I saw Ralph Hartley already came up with another "counter example",
the cycle.

Another thing that bothers me a bit is that nice features of the
"old" boundary operator should not get lost by going to the "new"
one.

For example, if you have a 2 complex:

--->------>------>---
|      |      |      |
^      ^      ^      ^
|      |      |      |
--->------>------>---
|      |      |      |
^      ^      ^      ^
|      |      |      |
--->------>------>---


In the "old" formalism, we could define 6 2-forms, having a
counter-clockwise circular arrow in each of the 6 squares.
There would be 17 edges.
In the "Plaquaette to edge" connectivity (6X17) matrix,
the plaquettes would all have a +1 entry for its the upper and
left edges, and a -1 entry for the lower and right side, and the
rest of the matrix would be zero's. You could use this matrix to
calculate boundaries, but a pictorial way is perhaps easier. The
computed boundary is identical to th "pictorial" boundary, ie,
one you would intuitively draw if you were asked to.

A nice property of the old boundary operator is that a boundary
of a larger region is just what you expect of a boundary, it is
the line around the area; all inner edges cancel out. For higher
dimensions, the pictorial correspondence remains intact.

If I interpret you correctly, which unfortunately I do not always
do, in the "new" formalism, you also get 6 plaquettes, which are
the 6 squares, or "diamonds". (So you exclude the 2 paths that
do not have a 90 degree turn in them?) Then, you can apply the
boundary operator to the 6 plaquettes. For each plaquette, you
get 4 1-paths, with a sign that I will get wrong, so I will
leave it out.
Hmm..
OK, so the intermediate edges *do* cancel. I was writing
this to show that they don't, but it is maybe nice not to
delete the arguments.

So if I have an n-dimensional object, and I take its
boundary, do I get an n-1 dimensional object that corresponds
to the intuitive boundary?

Gerard

Eric A. Forgy

Dec 9, 2003, 5:11:04 PM
Hi Ralph!

Ralph Hartley <har...@aic.nrl.navy.mil> wrote:

> It is getting clearer, but I'm still foggy abut a few points.

Thank you very much for your post. It sparked off a series of
behind-the-scenes dialogs between Urs and me. He and I are kind of like
the Yin and Yang of collaborators. Besides the fact that he is sharp
and I'm dense, he prefers to work in the "dual" picture, where you
deal with cochains, coboundary, and products thereof. I prefer to work
with chains, boundary, and products thereof.

If you take a look at his notes, it begins with a graded differential
algebra

Omega = (+)_{p=0}^D Omega^p

with a derivation d:Omega^p->Omega^{p+1} satisfying

i.) d^2 = 0
ii.) d(ab) = (da)b + (-1)^|a| a(db),

where |a| is the degree of a. Note that i.) and ii.) are taken as
axioms.

Although there is nothing wrong with taking a few axioms as a starting
point, I try to dig a little deeper and see what lies beneath. From my
early work, I stumbled across those very two expressions (which look a
lot like exterior calculus on a simplicial grid) in algebraic
topology, where you first define chains and a nilpotent boundary map
@. Then you define cochains and the coboundary

<dA|c> := <A|@c>.

From this and a natural definition of cup product (on the cochain
level prior to passing to cohomology), then i.) and ii.) pop out as
consequences. Not axioms. I somehow think this is a more natural
approach, but then again beauty is in the eye of the beholder :)

However, as we have discussed previously in this thread, the usual
boundary map in algebraic topology is a bit unnatural for paths on a
directed n-graph. Hence, I introduced my modified boundary map. I
thought that then considering the subspace of paths for which @^2 =
0 would lead to an equivalent dual formulation of what Urs has in his
notes.

Your example:

> Consider a triangle with cyclic arrows:
>
> 1-----A->----2
>  \          /
>   ^        B
>    \      /
>     C    v
>      \  /
>       \/
>       3

illustrated that I was not 100% correct in my approach that was
intended to be dual to Urs'.

If you don't mind, I would like to relabel your paths a little more
explicitly.

0-paths: |1>, |2>, |3>
1-paths: |12>, |23>, |31>
2-paths: |123>, |231>, |312>
3-paths: |1231>, |2312> , |3123>
...

Following my prescription, we can write down (as you did) the
"elementary geometrical paths" or chains that live on this lattice.

0-chains: |1>, |2>, |3>
1-chains: |12>, |23>, |31>
2-chains: |123> + |231> + |312>
3-chains: |1231> + |2312> + |3123>
4-chains: |12312> + |23123> + |31231>
...

What I mean (roughly) by an elementary chain is that any other chain
may be expressed as a linear combination of these. Note that C_0 and
C_1 (the space of 0- and 1-chains) are both 3-dimensional, while the
spaces of p-chains C_p for p>1 are all 1-dimensional (or so it seems).

Ok. So far so good (I hope!) :)

At this point, I should probably jump ship and start discussing
cochains, because it would probably be more clear that way, but I'll
see how far I can get with chains.

We began with p-paths on a directed n-graph and ended up with p-chains
as a certain subspace of p-paths for which @^2 = 0. Here is the
critical point.

==============
Critical Point
==============

We now have TWO algebras. One is a "path algebra" and the other, that
so far has not been discussed here, is the "chain algebra."

If it is starting to sound like I have gone over the deep end, let me
try to reassure you that the very same (analogous) thing happens in
the continuum theory :)

In the continuum we have a "tensor algebra" and we have an "exterior
algebra." The path algebra is the counterpart to the tensor algebra,
whereas the (co)chain
algebra is the counterpart to the exterior algebra.

Recall that exterior product is essentially the tensor product
followed by some kind of projection. In the case of the continuum
theory, the projection is antisymmetrization. In the lattice theory,
the analog of tensor product is concatenation and the analog of
exterior product is concatenation followed by some kind of projection
(expansion is probably a better word since antisymmetrization results
in MORE terms than you started with, but oh well. The space is smaller
I guess :)).

Using your very helpful example

> 1-----A->----2
>  \          /
>   ^        B
>    \      /
>     C    v
>      \  /
>       \/
>       3

I can now define this projection map, which is the version of
antisymmetrization, that corresponds to your lattice,

0-chains: A(|i>) = |i>
1-chains: A(|ij>) = |ij>
2-chains: A(|ijk>) = 1/3 (|123> + |231> + |312>)
3-chains: A(|ijkl>) = 1/3 (|1231> + |2312> + |3123>)
...

[Note that each lattice will come with its very own projection map. A
diamond lattice ends up looking a lot more like true
antisymmetrization.]

The last two lines follow from the fact that there is only one p-chain
for p>1 so any projection to chains must map to this one chain. The
coefficient is chosen so that A^2 = A.

Ok. So far so good (I hope!) :)

So far I haven't said anything that might reassure you about your
troubling example. Now, I'll try to compensate for that.

Let me now define a product of chains. As a product of chains, it must
take two chains and return a chain (obviously). The product will be
defined in terms of concatenation, but in general the concatenation of
two chains will NOT be a chain. Just as the tensor product of two
forms is not a form. We will then need to project to get a chain from
the resulting path.

To make the analogy with the continuum theory more pronounced, I will
write concatenation as (x) and concatenation followed by projection as
/\. Therefore, given two chains a and b, we can define their product
to be

a /\ b := A(a (x) b).

Note that although I use the letter "A" to make the analogy with the
continuum clear, it is NOT antisymmetrization (as the table above
illustrates).

Finally, I can use this to show that although your example does
provide examples of things that look like geometrical paths, i.e.
chains, that zoom off to infinite dimensions, the cure is that all
these higher degree examples exactly vanish, i.e. they are all equal
to the zero p-chain :)

In my list of valid p-chains, I failed to include the "0" p-chain :)

Of course, @^2 0 = 0, so that 0 is a perfectly valid p-chain :)

Now, I can show that your worrisome examples of p-chains for all p>1
are all equal to the zero p-chain :)

Maybe I'll spare you the gruesome details, but if you'd like me to
spell it out, let me know. Otherwise, you can probably learn more by
taking a look at Urs' notes in the "dual" picture.
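For what it's worth, the key step is easy to check by machine. The sketch below (my own helper names; exact arithmetic via Fractions) implements concatenation (x) and the triangle's projection map A as tabulated above, and verifies the second equality in Eric's chain of reasoning, |1> /\ A(|123>) = 1/3 A(|123>). Granting the first equality (the module identity carried over from the cochain side), A(|123>) = 0 then follows.

```python
# Sketch: verify |1> /\ A(|123>) = 1/3 A(|123>) on the cyclic triangle.
from fractions import Fraction

third = Fraction(1, 3)
A123 = {(1, 2, 3): third, (2, 3, 1): third, (3, 1, 2): third}  # A(|123>)

def concat(a, b):
    """(x): basis paths concatenate when endpoints match, else give 0."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            if p[-1] == q[0]:
                k = p + q[1:]
                out[k] = out.get(k, 0) + cp * cq
    return {k: v for k, v in out.items() if v != 0}

def proj(chain):
    """The triangle's projection A: identity on 0- and 1-paths; every
    admissible 2-path goes to 1/3 of each cyclic 2-path."""
    out = {}
    for p, c in chain.items():
        images = [(p, Fraction(1))] if len(p) <= 2 else [(q, third) for q in A123]
        for q, w in images:
            out[q] = out.get(q, 0) + c * w
    return {k: v for k, v in out.items() if v != 0}

def wedge(a, b):
    return proj(concat(a, b))

lhs = wedge({(1,): Fraction(1)}, A123)          # |1> /\ A(|123>)
rhs = {k: third * v for k, v in A123.items()}   # 1/3 A(|123>)
assert lhs == rhs
```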

> It is easy to show that K = J->J is a geometrical 4 path and so on, so in
> your scheme of things any (directed) triangle that can support a 2
> dimensional structure can support an infinite dimensional one.

Yep! :) But all these are equal to zero :) It is kind of like how
p-forms for p > D, where D is the dimension of the manifold, vanish :)
Your example is 1-dimensional, so that p-chains for p>1 vanish.

> I think you get the same thing for a cyclic square etc.

Probably. But there, they must vanish as well :)

> Assuming I haven't made a mistake, or missed another implied restriction,
> it would seem that you must really want to exclude cycles. If so, that
> would mean excluding 2 dimensional triangles altogether. It seems odd to
> have a discrete notion of geometry that discriminates between different
> tessellations of continuous space to the extent of forbidding some of them
> altogether.
>
> I think that last comment is the key to my confusion. Shouldn't a discrete
> notion of space deal impartially with different grids?

I don't know if a discrete space should deal completely impartially
with respect to different grids. After all, the grid determines the
dimensionality of the space at the least. You would expect some kind
of constraints appearing.

I do agree with your point that the grid should be able to handle
triangles. The good news is that our grids DO handle triangles. Sort
of :)

In the discrete calculus, a natural thing to emerge was the
interpretation of an arrow as determining a flow of time (as Urs
explained).

Once you accept the fact that the geometric grids we consider have a
built in notion of time evolution, then you can see that a
2-dimensional triangle in space must evolve in time.

I'll try to draw it, but I don't have very high expectations :)


>      b(i+1) +
>            / \
>           /   \
>          ^     v
>         /       \
>        /         \
> a(i) +-----<-----+ c(i+2)

Beginning at node a at time step i, denoted a(i), an edge
a(i)->b(i+1), ends up at node b at time i+1, i.e. b(i+1). Then the
edge b(i+1)->c(i+2) ends up at node c at times step i+2. Then, if you
were to close the loop, you'd have an edge c(i+2)->a(i+3) that ends up
at node a at time i+3. Although the labels are the same, the nodes
a(i) and a(i+3) are to be considered distinct points in the discrete
space-time.

In fact, you can have all the various "loops" happening
simultaneously, e.g.

>        b(i) +
>            / \
>           /   \
>          ^     v
>         /       \
>        /         \
> a(i+2) +-----<-----+ c(i+1)


>        b(i+2) +
>              / \
>             /   \
>            ^     v
>           /       \
>          /         \
> a(i+1) +-----<-----+ c(i)


>      b(i+2) +
>            / \
>           /   \
>          v     ^
>         /       \
>        /         \
> a(i) +----->-----+ c(i+1)

etc etc.

I don't know if this helps sort out any confusion or introduces more.
Please let us know either way :)

Best regards,
Eric

PS: As a sidenote, a true cycle, as in your example, would represent a
closed-timelike loop.

Eric A. Forgy

Dec 10, 2003, 10:39:53 AM
Most of the material in this post is due to Dimakis and
Mueller-Hoissen.

Ralph Hartley <har...@aic.nrl.navy.mil> wrote:
> Eric A. Forgy wrote:

Hi Ralph,

After posting that long-winded (partial) explanation based on chains,
now I'll try to explain it in the dual picture with cochains as in
Urs' notes.

What we have there is an (associative) graded algebra Omega with a
derivation

d: Omega^p -> Omega^{p+1}

satisfying

i.) d^2 = 0


ii.) d(ab) = (da)b + (-1)^|a| a(db),

where |a| is the degree of a. For more subtle points, see the notes.

In this approach, paths and copaths never really enter the picture. We
deal with chains and cochains right from the beginning. As we will
see, this is primarily due to the fact that we are taking d^2 = 0 as
an axiom.

The next step is to consider a countable set of elements {e^i} and
form a commutative algebra Omega^0 whose vector space is obtained by
taking formal linear combinations of {e^i} and defining a product via

iii.) e^i e^j = delta_{ij} e^i,

where delta_{ij} is the Kronecker delta (not to be confused with the
delta's in Urs' notes, which I am trying hard to get him to change :)
In fact, Urs' "delta_" is replaced here by my "e^".).

Beginning with this commutative algebra and the axioms for our
differential algebra, we can start turning the crank and derive all
kinds of neat consequences.

Probably the most important one for now is

e^i de^j
= e^i d(e^j e^j) <- By iii.)
= e^i [(de^j) e^j + e^j (de^j)] <- By ii.)
= e^i (de^j) e^j + delta_{ij} e^i de^j <- By iii.)

This means that

iva.) e^i (de^j) e^j = (1 - delta_{ij}) e^i de^j.

We then define

ivb.) e^{ij} := e^i (de^j) e^j.

We interpret e^i as (relating to) a node i in some abstract space and
e^{ij} as (relating to) a directed edge connecting node i to node j. A
complete graph would have edges connecting each pair of nodes.
However, we want to consider more specialized graphs where there may
not be directed edges connecting some pairs of nodes. To handle this,
we set e^{ij} = 0 if there is no edge i->j in the graph. As a
consequence, this means we have no "self loops" because by iv.), we
see that e^{ii} = 0.

You can probably see where concatenation is coming into the picture
because we can now define

v.) e^{i0...ip} = e^{i0i1} e^{i1i2} .... e^{i{p-1}ip}

Also, iv.) guarantees that

e^{i0...ip} e^{j0...jq} = delta_{ipj0} e^{i0...ipj1...jq}.

However, things are a little more subtle than they might seem because
e^{i0..ip} is a cochain already because d^2 = 0, so these are not
directly mapped to what I have been calling copaths.

Although it is a little more subtle than the previous calculations, we
can obtain an explicit expression for de^{i0...ip} from the
axioms/definitions we have so far. It is given by

de^{i0...ip} = sum_{k=0}^{p+1} (-1)^k sum_r e^{i0...i(k-1) r i(k)...ip},

where the inserted vertex r runs over all nodes and sits at position k.

For example,

de^i = sum_r (e^{ri} - e^{ir})

de^{ij} = sum_r (e^{rij} - e^{irj} + e^{ijr}).

To illustrate, we can use your example again:

> 1-----A->----2
>  \          /
>   ^        B
>    \      /
>     C    v
>      \  /
>       \/
>       3

For this example, we will have

de^1 = e^{31} - e^{12}
de^2 = e^{12} - e^{23}
de^3 = e^{23} - e^{31}

and

de^{12} = e^{312} + e^{123}
de^{23} = e^{123} + e^{231}
de^{31} = e^{231} + e^{312}.

Here is where the subtle (perhaps strange) part comes in. Now we can
compute

d^2 e^1
= de^{31} - de^{12}
= (e^{231} + e^{312}) - (e^{312} + e^{123})
= e^{231} - e^{123}

Similarly,

d^2 e^2 = e^{312} - e^{231}
d^2 e^3 = e^{123} - e^{312}.

However, recall that d^2 = 0 is taken as an axiom. What this does is
force relations among the 2-cochains. Namely, we now have

e^{123} = e^{231} = e^{312}.

This is why we can't really think of e^{ijk} as a "copath" because for
copaths, we have <ijk| != <jki|.

Now, to demonstrate that your p-chains for p>1 in your example vanish,
we can consider the dual picture

e^{123}
= e^1 e^{123}
= e^1 e^{231}
= e^1 e^{312}
= 0

because

e^i e^{jkl} = delta_{ij} e^{jkl}.

Therefore, we have

e^{123} = e^{231} = e^{312} = 0.

Of course, this kills all p-cochains for p>1. Therefore, your example
is 1-dimensional.
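The naive computation, before imposing d^2 = 0 as a relation, can be reproduced mechanically. This sketch (my own helper names; the rule e^{ij} = 0 for missing edges is encoded in `admissible`) computes d by the insertion formula on the cyclic triangle and confirms de^1, de^{12}, and d^2 e^1 = e^{231} - e^{123} as above.

```python
# Sketch: the insertion-formula d on the cyclic triangle 1->2->3->1.

edges = {(1, 2), (2, 3), (3, 1)}
vertices = [1, 2, 3]

def admissible(p):
    """A tuple survives only if every consecutive pair is an edge."""
    return all((p[k], p[k + 1]) in edges for k in range(len(p) - 1))

def d(cochain):
    """Insert a vertex at each position k with sign (-1)^k, discarding
    tuples that use a non-existent edge (those basis elements are 0)."""
    out = {}
    for p, c in cochain.items():
        for k in range(len(p) + 1):
            for v in vertices:
                q = p[:k] + (v,) + p[k:]
                if admissible(q):
                    out[q] = out.get(q, 0) + c * (-1) ** k
    return {key: val for key, val in out.items() if val != 0}

e1 = {(1,): 1}
assert d(e1) == {(3, 1): 1, (1, 2): -1}                # de^1 = e^{31} - e^{12}
assert d({(1, 2): 1}) == {(3, 1, 2): 1, (1, 2, 3): 1}  # de^{12} = e^{312} + e^{123}
assert d(d(e1)) == {(2, 3, 1): 1, (1, 2, 3): -1}       # d^2 e^1 = e^{231} - e^{123}
```

Imposing d^2 = 0 on top of this then forces exactly the relations e^{123} = e^{231} = e^{312} described in the text.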

This is a purely algebraic demonstration. What I gave in my prior post
was an attempt at a more geometrical demonstration. If I had
continued, I would have shown that

A(|123>)
= |1> /\ A(|123>)
= 1/3 A(|123>)

which means that

A(|123>) = 1/3 (|123> + |231> + |312>) = 0.

This is kind of dual to what I showed in the present post.

Either way, we see that your worrisome p-chains for p>1, although
"geometrical" in the sense that I defined it, vanish for p>1. In a
nutshell, you are abslutely correct about your example. We just hadn't
gotten far enough along yet to demonstrate that those particular
chains vanish.

Best regards,
Eric

Ralph Hartley

Dec 13, 2003, 12:29:08 AM
to Eric A. Forgy
Eric A. Forgy wrote:

> Ralph Hartley <har...@aic.nrl.navy.mil> wrote:

>> 1-----A->----2
>>  \          /
>>   ^        B
>>    \      /
>>     C    v
>>      \  /
>>       \/
>>       3

...


> If you don't mind, I would like to relabel your paths a little more
> explicitly.
>
> 0-paths: |1>, |2>, |3>
> 1-paths: |12>, |23>, |31>
> 2-paths: |123>, |231>, |312>
> 3-paths: |1231>, |2312> , |3123>

I don't mind at all, I was intentionally a bit redundant.

> we can write down (as you did) the
> "elementary geometrical paths" or chains that live on this lattice.
>
> 0-chains: |1>, |2>, |3>
> 1-chains: |12>, |23>, |31>
> 2-chains: |123> + |231> + |312>
> 3-chains: |1231> + |2312> + |3123>
> 4-chains: |12312> + |23123> + |31231>
...

> I can now define this projection map, which is the version of
> antisymmetrization, that corresponds to your lattice,
>
> 0-chains: A(|i>) = |i>
> 1-chains: A(|ij>) = |ij>
> 2-chains: A(|ijk>) = 1/3 (|123> + |231> + |312>)
> 3-chains: A(|ijkl>) = 1/3 (|1231> + |2312> + |3123>)
...

> given two chains a and b, we can define their product
> to be
>
> a /\ b := A(a (x) b).
>

> Finally, I can use this to show that although your example does
> provide examples of things that look like geometrical paths, i.e.
> cochains, that zoom off to infinite dimensions, the cure is that all
> these higher degree examples exactly vanish, i.e. they are all equal
> to the zero p-chain

(I don't like to quote this much, but I like to make each post make sense
by itself)

OK, lets try an example:

|12> /\ |23> =
A(|12>(x)|23>) =
A(|123>) =
1/3 (|123> + |231> + |312>)

Which doesn't *appear* to be 0. But wait! So far I only know the definition
of equality for *paths*. I wouldn't be surprised if with a sensible
definition for *chains*, it *is* equal to 0.
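For concreteness, that computation can be mechanised. Below is a minimal Python sketch, assuming (as in Eric's definitions) that concatenation glues paths at a matching endpoint and that, on this 3-cycle, the projection A replaces any elementary p-path with p >= 2 by the average of its three cyclic shifts. The names `tensor`, `A` and `wedge` are mine, not Eric's:

```python
from collections import defaultdict
from fractions import Fraction

CYCLE = "123123123"  # the cycle 1 -> 2 -> 3 -> 1, unrolled for slicing

def cyclic_paths(p):
    """The three elementary p-paths on the cycle, e.g. p=2: 123, 231, 312."""
    return [CYCLE[i:i + p + 1] for i in range(3)]

def tensor(a, b):
    """Concatenation: |...x> (x) |x...> = |...x...>; mismatched endpoints give 0."""
    out = defaultdict(Fraction)
    for pa, ca in a.items():
        for pb, cb in b.items():
            if pa[-1] == pb[0]:
                out[pa + pb[1:]] += ca * cb
    return dict(out)

def A(chain):
    """Projection: average each p-path (p >= 2) over its three cyclic shifts."""
    out = defaultdict(Fraction)
    for path, c in chain.items():
        if len(path) <= 2:                 # 0- and 1-paths are left alone
            out[path] += c
        else:
            for q in cyclic_paths(len(path) - 1):
                out[q] += c * Fraction(1, 3)
    return {k: v for k, v in out.items() if v}

def wedge(a, b):
    return A(tensor(a, b))

# |12> /\ |23> = A(|123>) = 1/3 (|123> + |231> + |312>)
print(wedge({"12": Fraction(1)}, {"23": Fraction(1)}))
```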

> Your example is 1-dimensional

But that isn't the answer I wanted! My example was the only remaining way
to allow a triangle, and you conclude that it doesn't make one either.

A triangle is a *2*-dimensional figure. I was asking how (in your scheme) a
triangle could be *exactly* 2-dimensional.

You appear to be saying that you don't allow triangles at all. You do allow
the three sides that would be the (1-dimensional) boundary of a triangle if
triangles were allowed, but you have no place for the interior of a triangle.

I have a bit of a problem with a scheme which does not *admit* the concept of
"triangle" claiming to represent "geometry".

>>I think that last comment is the key to my confusion. Shouldn't a discrete
>>notion of space deal impartially with different grids?

> I don't know if a discrete space should deal completely impartially
> with respect to different grids. After all, the grid determines the
> dimensionality of the space at the least. You would expect some kind
> of constraints appearing.

I would expect those constraints to be at least related to the constraints
geometry puts on the ways to put a grid on a space. E.g. I wouldn't expect
to be allowed to have a 2-d grid with translational symmetry and 5-fold
rotational symmetry (bla, bla, fine print, bla).

But I *would* expect a triangular lattice to be a valid discrete 2-d space
or a valid 1+1d space-time, with time in any direction.

> I do agree with your point that the grid should be able to handle
> triangles. The good news is that our grids DO handle triangles. Sort
> of :)

The boundary of a triangle is not a triangle. I want triangles to be as
good as squares!

> In the discrete calculus, a natural thing to emerge was the
> interpretation of an arrow as determining a flow of time

> I don't know if this helps sort out any confusion or introduces more.


> Please let us know either way :)

> PS: As a sidenote, a true cycle, as in your example, would represent a
> closed-timelike loop.

I think I understand, but I don't think it is really important to my point.

The problem seems to be that you only allow concatenation of paths of the
same degree, i.e. |12>(x)|23> = |123> but not something like |123>(x)|31> =
|1231> (which isn't quite right, perhaps a 3 argument operator
(Cycle)(|12>,|23>,|31>) = |C123>), which is something you *need* to define
triangles, since the boundary of a triangle can't be divided into 2 equal
parts.

To really allow all of geometry you need to generalize paths to graphs, and
work in the "thingy" of graphs instead of the category of paths (graphs are
a "thingy" instead of a category because there are more ways to "compose"
them). See my other thread on "thingies", which was directly inspired by
earlier posts in this thread.

Ralph Hartley


Eric A. Forgy

Dec 13, 2003, 5:24:45 AM
Hi Gerard :)

Gerard Westendorp <wes...@xs4all.nl> wrote:
> Eric A. Forgy wrote:
>

> I saw Ralph Hartley already came up with another "counter example",
> the cycle.

I just wrote a humongous response to Ralph's question. The punchline
is that his lattice turns out to be 1-dimensional. All those
geometrical paths that he illustrated for p > 1 are all equal to the
"0" chain :) It was an excellent example.

> Another thing that bothers me a bit is that nice features of the
> "old" boundary operator should not get lost by going to the "new"
> one.

I assure you that they are not lost :)

> For example, if you have a 2 complex:
>

> +--->--+--->--+--->--+
> | | | |
> ^ ^ ^ ^
> | | | |
> +-->---+-->---+-->---+
> | | | |
> ^ ^ ^ ^
> | | | |
> +--->--+--->--+--->--+

This is a prime example of a section of a "2-diamond complex." It
consists of unit 2-diamonds

> 3+--->---+4
> | |
> ^ ^
> | |
> 1+--->---+2

We would express this elementary 2-diamond algebraically as

|124> - |134>

The boundary of this is expressed algebraically as

@(|124> - |134>)
= [|24> + |12>] - [|34> + |13>]
= |12> + |24> - |34> - |13>

which is exactly what you would expect "pictorially."
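That boundary calculation can be checked with a few lines of Python, assuming @ is the usual alternating-sum boundary on paths, with any term whose consecutive vertices are not joined by a lattice edge set to zero (that convention reproduces the result quoted above; the function names are mine):

```python
from collections import defaultdict

# Directed edges of the unit 2-diamond: 1->2, 1->3, 2->4, 3->4
EDGES = {(1, 2), (1, 3), (2, 4), (3, 4)}

def allowed(path):
    """A path is allowed iff every consecutive pair is a lattice edge."""
    return all((a, b) in EDGES for a, b in zip(path, path[1:]))

def boundary(chain):
    """@: alternating sum of vertex deletions, dropping non-allowed paths.

    A chain is a dict mapping vertex tuples to integer coefficients.
    """
    out = defaultdict(int)
    for path, coeff in chain.items():
        for k in range(len(path)):
            face = path[:k] + path[k + 1:]
            if allowed(face):
                out[face] += (-1) ** k * coeff
    return {p: c for p, c in out.items() if c != 0}

# @(|124> - |134>) = |12> + |24> - |34> - |13>, as above:
# the |14> terms drop out because 1->4 is not an edge of the diamond.
print(boundary({(1, 2, 4): 1, (1, 3, 4): -1}))
```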

> So if I have an n-dimensional object, and I take its
> boundary, do I get an n-1 dimensional object that corresponds
> to the intuitive boundary?

Yes. But the catch is that some "objects" are not allowed. The most
surprising one would be a simplex. However, once you take the
space-time interpretation literally (like we do), then you can have
arbitrarily shaped objects, including simplices, that are propagating
through time. It is this latter point that is crucial. Our framework
doesn't seem to be capable of handling space without time since time
is built into each arrow.

Eric

Eric A. Forgy

Dec 14, 2003, 7:08:47 AM
Hi Ralph! :)

Ralph Hartley <har...@aic.nrl.navy.mil> wrote:
> Eric A. Forgy wrote:
> > Ralph Hartley <har...@aic.nrl.navy.mil> wrote:

> But that isn't the answer I wanted! My example was the only remaining way
> to allow a triangle, and you conclude that it doesn't make one either.
>
> A triangle is a *2*-dimensional figure. I was asking how (in your scheme) a
> triangle could be *exactly* 2-dimensional.
>
> You appear to be saying that you don't allow triangles at all. You do allow
> the three sides that would be the (1-dimensional) boundary of a triangle if
> triangles were allowed, but you have no place for the interior of a triangle.
>
> I have a bit of a problem with a scheme which does not *admit* the concept of
> "triangle" claiming to represent "geometry".

Believe me. So did I! :) In fact, I spent my entire dissertation
research looking for one! :) The fact that I failed should not be
taken as an indication that one does not exist, but to me, just
finding a formalism of discrete differential geometry at all was
enough for me to consider the possibility that maybe my mental bias
toward triangles had been the barrier to progress all along.

I wrote up a VERY long explanation of the short history of Urs' and my
discovery complete with our motivation, but apparently the moderators
didn't like it. Their notification never reached me because this email
address is no longer active. Oh well. I should have inserted more
physics content :)

> But I *would* expect a triangular lattice to be a valid discrete 2-d space
> or a valid 1+1d space-time, with time in any direction.

I totally see where you are coming from. However, if you insist on
this, I doubt that you will make much progress. The key insight to
making discrete differential geometry "work" is a very subtle one that
Urs took for granted. It was obvious to him, but it really allowed
all the pieces I had been working on over the years to FINALLY fall
into place. That is, if your lattice is supposed to represent D =
(D-1)+1 dimensional space-time, there must be D edges directed away
from each node. Urs insisted on this from the beginning, otherwise he
could not define a sensible vielbein. At first, I resisted for the
very reasons you are complaining about. It means that our lattice can
NOT be built up from simplices. This is really the crucial point of
the whole thing.

If you finally accept the idea long enough to contemplate the
consequences, the first complaint I had was that it would seem to
introduce a preferred direction in the grid. This was based upon my
erroneous assumption that there would be D-1 directed edges defined at
one instant in time, where the remaining edge is pointing directly
into the time direction at a single point in space. The eureka moment
came when it dawned on me that you can turn the edges upward so that
all D of them are light-like. By doing this, we eliminate the
preferred direction in space that was bothering me, but we introduce
another preferred direction, time :)

Now, I hope the moderators let this through because we ARE talking
about physics. Or at least headed in that direction. All of this is
required for lattice gauge theory "done right". As Urs has mentioned,
this approach to lattice gauge theory appears to have many distinct
advantages over what we have seen in the literature so far.

> > I do agree with your point that the grid should be able to handle
> > triangles. The good news is that our grids DO handle triangles. Sort
> > of :)
>
> The boundary of a triangle is not a triangle. I want triangles to be as
> good a squares!

DoH! I'm sorry if my drawings were not clear enough :)

There is a BIG difference between the 1-chain

|ij'> + |j'k''> + |k''i'''>

and the 3-path

|ij'k''i'''>,

where the # of primes denotes the time step. In those figures I drew, I
meant that I was considering the corresponding 3-paths and NOT the
1-paths.

In our framework, those triangle-like structures I drew are actually
3-dimensional. Not 1-dimensional. They represent the surface of a
triangle sweeping through time.

This would be the building block for a (2+1)d lattice:

> i4 +------>------+ i7
> /| /|
> ^ | ^^^ ^ |
> / ^ /// / ^
>i2 +------>------+ i5|
> | | /// | |
> |i3 +------>--|---+ i6
> ^ / ^ /
> | ^ | ^
> |/ |/
>i0 +------>------+ i1

We call it a "3-diamond" and the corresponding lattice a "3-diamond
complex".

Just as any smooth manifold M can be triangulated, I believe that any
manifold that can be written as M x R can be diamonated :) We haven't
proved this yet. Perhaps the relation should be something like, "If a
manifold admits a foliation, it is diamonatable." :)

Now, if you stare at that figure, you can see that the nodes
{i1,i2,i3} and {i4,i5,i6} each define something that has the right to
be interpreted as a two-dimensional triangle, although they are
really just snapshots of a 3-diamond at particular instants in time.
There are no true "triangles" at an instant in time because there are
no edges at an instant in time. All edges are light-like.

I know this probably isn't satisfying to you, but it is the best we
can do and still formulate a working theory of discrete differential
geometry that can be applied to lattice gauge theory.

> The problem seems to be that you only allow concatenation of paths of the
> same degree, i.e. |12>(x)|23> = |123> but not something like |123>(x)|31> =
> |1231> (which isn't quite right, perhaps a 3 argument operator
> (Cycle)(|12>,|23>,|31>) = |C123>), which is something you *need* to define
> triangles, since the boundary of a triangle can't be divided into 2 equal
> parts.

Concatenation is definitely not defined only for paths of the same
degree. It is defined for all paths. The key point is that you might
end up with nonvanishing paths that "project" to vanishing chains. For
example, in D dimensions, the tensor product of a p-form and a q-form,
where p+q > D is not zero. However, once you antisymmetrize, it gets
projected to zero because there are no (p+q)-forms for p+q>D. It is
almost precisely the same thing here, but the projection will depend
on your lattice. For diamond complexes, the projection is very similar
to antisymmetrization, but not quite. For other lattices, it could be
quite different.

[snip about "thingies"]

I don't really understand your "thingies", but all I can say is that
we do have a perfectly fine theory of geometry :) The, perhaps
unsatisfying, thing about it is that it comes with a built in notion
of time evolution. So if you are interested in geometry without time
evolution, I don't think our framework is what you want. Fortunately,
we are interested in physics and having time tends to come in handy :)

I'm guessing you wrote this before my second "dual" response to your
question appeared. I'm hoping that with that and this post that we are
starting to get the idea across more clearly.

Best regards,
Eric

PS: I was about to send this and thought that perhaps I was being too
hard on ourselves :) If you wanted to study geometry without time
evolution, you could still do so, you would just make sure that
nothing changed from one instant to the next. This is certainly doable
within our framework.

Tim S

Feb 22, 2004, 6:07:51 AM

[Hmm, sent this a while ago, but it doesn't seem to have appeared and I
haven't seen a rejection. Apologies if it appears twice or I missed the
bounce.]

[I guess this is really a followup to my post on classical EM on a lattice,
but that's expired off the server, I've lost my copy, and I'm not sure how
to make the headers go right by hand. Anyway, this post belongs in this
thread and this is as good a place for it as any.]

OK, and so on to Part 2 of my occasional series _Random Blunderings In
Lattice Gauge Theory_. Before you start, I should warn you that much of what
follows will be agonisingly simple to many of you -- perhaps to most of you.
I therefore invite you -- nay, beg you -- to skip ahead over parts that are
blindingly obvious, before you are blinded by their obviousness. Really, I'm
just teaching myself representation theory in public in the most shameless
manner. However, we end up with spin networks, so I hope this process will
be of some benefit to a few interested bystanders.

And, having carelessly dropped that particularly tantalising scented silk
handkerchief onto the gravel path, the coquettish Lady Mathematics de
Physics retires to her boudoir to await the most ardent of her lovers...

But I digress.

What to expect

This time we're going to quantise. Since quantisation is often difficult and
can introduce many extraneous complications, I'd like to quantise something
simple. Preferably, it should be something which is as simple as possible
while retaining some sort of essence of lattice gauge theory; or if not an
actual essence, at least a detectable resemblance to lattice gauge theory.

Now, U(1) bundles over a graph may seem simple enough to you, and R bundles
even more so, but for my humble purposes they're still too complicated. For
a start, their fibres are one-dimensional, and from where I stand, that's
one dimension too many. So instead I'm going to work with a simpler bundle,
the bundle of orientations of a square.

The fibre of this bundle is the set of ways to put a square peg into a
square hole (which has at least the merit of being more interesting than the
set of ways to put a square peg into a round hole). This set has four
members, and they are related to one another by rotations of the square. The
corresponding symmetry group is the cyclic group on 4 elements, C_4 (aka
Z_4, Z/<4> and Z/(4)).

C_4: what a great group. It's discrete, it's compact, it's Abelian, it's
very, very small, and it has many other excellent properties which I shan't
bore you with just yet. It's so simple it might even be confusing! No
matter.

The bundle of orientations of a square over a directed graph consists of a
copy of the fibre -- the set of orientations -- over each vertex of the
graph.

I'll start by reviewing the elementary properties of this bundle, just to
get everything straight in my head.

Some Elementary Stuff About A Fibre Bundle

A trivialisation of the bundle consists of a choice of one member from each
fibre to act as the 'base' member. Once we've made this choice, then every
member of a fibre can be identified with a corresponding element of C_4. The
base member is identified with the identity of C_4, 0, while the other
members are identified with the elements of C_4 that rotate the base member
onto them. We needn't worry about the distinction between local and global
trivialisations, since, on a graph, all local trivialisations can be
extended to global ones.

A connection on the bundle consists of a choice, for each edge of the graph,
of a bijective map from the fibre at the source of the edge to the fibre at
its target, such that the rotational relations are preserved. So, for
instance, if, in the source fibre, a is 90 degrees clockwise from a', then,
in the target fibre, the image of a should be 90 degrees clockwise from the
image of a'.

Given a trivialisation of the bundle, each source member gets labelled with
an element of C_4, and likewise each target member. Since relative
orientations are preserved, the value of a connection on a given edge
becomes identified with a member of C_4, which assigns the source labels
correctly to the target labels. And since C_4 is Abelian, we don't even need
to worry about whether the edge C_4 is acting by multiplication on the left
or by multiplication on the right.

If E is the number of edges, the space of all connections thus becomes
identified with the product of E copies of C_4.

We could pick a different trivialisation. At each vertex, the new labels are
related to the old labels by a rotation -- a member of C_4. Thus the change
of trivialisation -- the gauge transformation -- consists of a member of C_4
for each vertex.

Under a given gauge transformation, the labelling of a connection will also
change. For instance, suppose on a given edge, the original labelling
assigned a 180 degree rotation. If the source labels then rotate 90 degrees
clockwise, and the target labels rotate 180 degrees, then the edge label
gains an additional rotation of -90 degrees clockwise + 180 degrees = 90
degrees clockwise, giving a new label value of 90 degrees clockwise + 180
degrees = 270 degrees clockwise (90 degrees anticlockwise).
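In symbols, with labels counting 90-degree clockwise steps (so members of Z/4), that worked example is just modular arithmetic; a tiny sketch, with my own function name:

```python
def transform_edge(label, x_src, x_tgt):
    """New edge label after gauge rotations x_src (source) and x_tgt (target).

    All arguments count 90-degree clockwise steps, i.e. elements of Z/4.
    """
    return (x_tgt + label - x_src) % 4

# original label 180 deg (= 2); source rotates 90 cw (= 1); target 180 (= 2)
print(transform_edge(2, 1, 2) * 90, "degrees clockwise")  # 270, as above
```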

My head's spinning. I think that's enough reviewing for now.

Next, we want to prepare some representation theory to make use of later.
All our representations will be complex (but not complicated).

Complex Representations of C_4

The representation theory of C_4 is very simple. Since it's Abelian, all its
irreducible representations are 1-dimensional. Hence an irrep of C_4
consists simply of a complex number for each element of C_4, to multiply the
complex plane by. There are four representations, namely

0) The Trivial Representation

Member of C_4 Corresponding complex number
0 -> 1
1 -> 1
2 -> 1
3 -> 1

1) Another Representation

0 -> 1
1 -> i
2 -> -1
3 -> -i

2) The Alternating Representation

0 -> 1
1 -> -1
2 -> 1
3 -> -1

3) The Dual Of Another Representation

0 -> 1
1 -> -i
2 -> -1
3 -> i

I've labelled each representation by a number k (i.e. 0, 1, 2 or 3). The
choice of k is not accidental, since representation k sends x in C_4 to
exp(ikx pi/2)
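That formula is easy to check numerically; here's a quick sketch (the name `chi` is mine) verifying that each of the four assignments really is a homomorphism from C_4 into the nonzero complex numbers:

```python
import cmath

def chi(k, x):
    """Irrep k of C_4 evaluated at x: exp(i k x pi / 2)."""
    return cmath.exp(1j * k * x * cmath.pi / 2)

# each chi(k, .) is a homomorphism: chi_k(x + y mod 4) = chi_k(x) chi_k(y)
for k in range(4):
    for x in range(4):
        for y in range(4):
            assert abs(chi(k, (x + y) % 4) - chi(k, x) * chi(k, y)) < 1e-12

# k = 1 reproduces "Another Representation": 1, i, -1, -i
print([complex(round(chi(1, x).real), round(chi(1, x).imag)) for x in range(4)])
```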

OK, now we're ready to quantise.

Quantisation I: Constructing the Hilbert Space

Actually, we haven't got any dynamics, but quantising the kinematics is
quite enough to be going on with.

Our classical configuration space consists of the space of connections,
modulo gauge transformations. But let's forget about gauge transformations
for the moment, and just quantise the space of connections.

This means we are interested in L^2 on the space of connections. We have
identified the space of connections with

C_4 x C_4 x C_4 x ... x C_4 (E times)

where E is the number of edges. This is a set with 4^E points, so L^2 of the
configuration space is just the vector space of all complex-valued functions
on this set -- a 4^E-dimensional vector space -- together with a suitable
norm. One norm to use is kinda obvious. Under this norm, a possible
orthonormal basis is the one consisting of functions which take the value 1
on a single connection -- a single point of the product set -- and take the
value 0 everywhere else.

But never mind that. We'll choose a slightly different norm, and here's how
we go about it. For reasons which will shortly become clearer, what we
really, really want to do at this point is Fourier analysis on C_4. We do
this by expressing our functions in terms of a different basis. Confining
ourselves for a moment to a single edge, this basis consists of the
following four functions on C_4, labelled by their wavenumber, k:

0) The Constant Function

Member of C_4 Corresponding complex number
0 -> 1
1 -> 1
2 -> 1
3 -> 1

1) Another Function

0 -> 1
1 -> i
2 -> -1
3 -> -i

2) An Alternating Function

0 -> 1
1 -> -1
2 -> 1
3 -> -1

3) The Complex Conjugate Of Another Function

0 -> 1
1 -> -i
2 -> -1
3 -> i

Ha! These functions exhibit, I think you will agree, a striking resemblance
to the list of irreducible representations I showed you earlier (right down
to the letter k used to symbolise their numerical labels!) This resemblance
is in fact extremely non-accidental.

Part of the match should not be entirely surprising. C_4 acts on itself by
multiplication (or 'addition' as people sometimes call it). Complex-valued
functions on C_4 thereby get carried along and end up as other
complex-valued functions. It is obvious that this operation preserves sums
and scalar multiples of functions, hence is a linear operation on the vector
space of functions, hence gives a representation of C_4 on the space, which
must be the direct sum of irreps (all of which are 1-dimensional for Abelian
groups like C_4).

What is not obvious is that there is exactly one copy of each irrep in the
direct sum. But it's true. It's true for other discrete Abelian groups too.

(WARNING! When we come to deal with non-Abelian groups a bit later, we'll
see that the above statement is no longer true -- though a generalisation of
it is. Further generalisations apply also to nondiscrete groups like Lie
groups.)

(An aside: the fact that the Fourier transform gives components with respect
to bases of 1-dimensional irreps of C_4, or, in other words, diagonalises
the rotation operators in C_4, considered as acting on the vector space of
functions on C_4, is really what makes the Fourier transform interesting and
useful. It is also what makes its big brother, the Fourier transform on R
interesting and useful: it diagonalises the translation operators, acting on
L^2(R), as well as their infinitesimal generator the derivative operator.
Likewise in R^n.)

By the way, you may have noticed that, though orthogonal, the above
functions are not normalised. We can rectify this either by dividing them by
2 or by using another norm, the same as the first but a quarter as big. I'll
take the latter course, as I can't be having with factors of 2 flying about
all over the place, pesky things, nearly as bad as minus signs.

And now for an important point: in addition to being the bases of irreps of
C_4 acting on itself by multiplication, the same functions are also bases of
irreps of C_4 acting on itself by inverse multiplication -- x: y -> y * x^-1.
But they are not necessarily the _same_ irreps in the two roles; rather,
they are interchanged with their duals. That is, 0 and 2 stay the same, but
1 and 3 swap places, as you can easily verify.
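That swap is easy to verify by machine; a small sketch, with my own notation, checking that under the inverse action x: y -> y - x (mod 4) the function of wavenumber k picks up the factor of the *dual* irrep:

```python
import cmath

def f(k, y):
    """Fourier basis function of wavenumber k on C_4."""
    return cmath.exp(1j * k * y * cmath.pi / 2)

# under the inverse action of x (y -> y - x mod 4), wavenumber k picks up
# the factor exp(-ikx pi/2) = chi_{4-k}(x): the dual irrep, so 1 <-> 3
for k in range(4):
    for x in range(4):
        for y in range(4):
            lhs = f(k, (y - x) % 4)            # function after the inverse action
            rhs = f(k, y) * f((4 - k) % 4, x)  # original times dual-irrep factor
            assert abs(lhs - rhs) < 1e-12
print("inverse action carries wavenumber k in the dual irrep (4 - k) mod 4")
```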

We're now in a good position to take a look at the action of the gauge group
on a single edge.

Quantisation II: Gauge Symmetry

The gauge group acts, if you recall, by rotating the fibres at each vertex
(or by rotating their labels). For a single edge, only two vertices matter:
the one at the start and the one at the end.

If a gauge transformation acts on the source vertex by a member of C_4, then
the transformation of the connection on the edge 'compensates' for this by
acting with the inverse member of C_4. On the other hand, if the gauge
transformation acts on the target vertex by a member of C_4, then the
transformation of the connection on the edge 'catches up' by acting with the
same member of C_4.

So much for the configuration space. Now we need to look at the effect of
the gauge group on the Hilbert space L^2(config space). As discussed
earlier, the effect of the gauge group is linear, so we only need to think
about its effect on a basis, and of course we choose the Fourier basis
tabulated above.

Each member of the basis is the basis of an irrep, and so under the gauge
group it simply gets multiplied by a complex number: under the direct
action of a member x of C_4, the basis function of wavenumber k gets
multiplied by exp(ikx pi/2), while under the inverse action, it gets
multiplied by exp(-ikx pi/2). The former (direct action) applies to
gauge transformations of the target vertex, and the latter (inverse action)
to gauge transformations of the source vertex. In general, there will be a
change by x at the source and by y at the target, leading to multiplication
by exp(ik(y-x) pi/2) on the edge.

Well, that's for a single edge, but graphs typically have more than one
edge, so we need to consider multiple edges.

Suppose there are two edges. Then the space of connections is C_4 x C_4,
with 16 members. The space of functions on C_4 x C_4 is 16-dimensional. We
can choose the sixteen members of a useful basis by doing the equivalent of
2-dimensional Fourier analysis on these functions.

For example, one member of our Fourier basis is the following function,
labelled 1 (x) 1

Edge 1
| 0 1 2 3
E --+------------------
d 0 | 1 i -1 -i
g 1 | i -1 -i 1
e 2 |-1 -i 1 i
2 3 |-i 1 i -1

another is the following, labelled 3 (x) 0

Edge 1
| 0 1 2 3
E --+------------------
d 0 | 1 -i -1 i
g 1 | 1 -i -1 i
e 2 | 1 -i -1 i
2 3 | 1 -i -1 i

And so on.
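For the curious, both tables can be regenerated from the formula k1 (x) k2 : (x1, x2) -> exp(i(k1 x1 + k2 x2) pi/2); a sketch, with my own function name:

```python
import cmath

def basis(k1, k2):
    """Fourier basis function k1 (x) k2 on C_4 x C_4, tabulated as above:
    rows indexed by the edge-2 value x2, columns by the edge-1 value x1."""
    tab = [[cmath.exp(1j * (k1 * x1 + k2 * x2) * cmath.pi / 2)
            for x1 in range(4)] for x2 in range(4)]
    # round to the nearest Gaussian integer for readability
    return [[complex(round(z.real), round(z.imag)) for z in tab_row]
            for tab_row in tab]

for row in basis(1, 1):   # the "1 (x) 1" table
    print(row)
for row in basis(3, 0):   # the "3 (x) 0" table: every row reads 1, -i, -1, i
    print(row)
```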

The effect of the gauge group on this is straightforward: each basis member
defines a 1-dimensional irrep, and to get the action of the gauge group on
it, we first multiply by the appropriate factor for edge 1, (rotating the
table horizontally) and then by the appropriate factor for edge 2 (rotating
the table vertically).

Similarly for E edges, we get a direct sum of 4^E irreps, and the effect of
the gauge group on each is multiplication by the product of the appropriate
factors for each edge.

Note however that each factor is an exponential, of the form

exp(ik_e (y_e - x_e) pi/2)

where k_e is the irrep label for edge e, y_e is the gauge action on the
target of e, and x_e is the gauge action on the source of e. To get the
product of the factors, we can instead first sum up all the terms
k_e y_e for each edge, subtract the sum of the terms k_e x_e for each
edge, and exponentiate in the last step.

Quantisation III: Modding out Gauge Transformations

We've got a basis for the Hilbert space over all connections, namely we do
E-dimensional Fourier analysis, and end up with a direct sum over all
combinations of edge-labellings by each of the Fourier basis functions or,
to put it another way, by all combinations of edge-labellings by each of the
irreps of C_4.

However, physically, we can't always distinguish between two connections. In
particular, we can't distinguish between two connections related by a gauge
transformation. In effect, we can't tell whether a given set of labels
denotes one connection under one trivialisation, or a different connection
under a different trivialisation. So we want to cut down our configuration
space to contain only physically distinguishable connections.

The way to do this is to observe that any function defined on such a space
is equivalent to a function defined on the space of _all_ connections but
which takes the same value on any set of mutually indistinguishable
connections, and vice versa. Thus, we can take the Hilbert space we have
constructed, and pick out those functions which are unaltered by any action
of the gauge group.

Now, if we use our E-dimensional Fourier basis functions, the action of any
member of the gauge group on any basis function is quite simple: it
multiplies it by a number, which will be one of 1, i, -1 and -i. So the
subspace we are looking for is generated by those basis functions which get
multiplied by 1 rather than by i, -1 or -i.

We need this to work for every member of the gauge group. However,
fortunately, although the gauge group may be rather large, it can be simply
decomposed into its action at separate vertices. Since any combination of
possible actions at different vertices may occur, we need the effect of any
action at any vertex to give a multiplication factor of 1 separately.

An action at a vertex by an element x of C_4 will affect only those edges
which either start or terminate at that vertex. As discussed above, the
multiplication factor associated with the element x will be

exp(ix (Sum k_e over incoming edges - sum k_e over outgoing edges) pi/2)

If this is to be 1, then

Sum k_e over incoming edges - sum k_e over outgoing edges

must be 0, or in other words we must have

Sum k_e over incoming edges = sum k_e over outgoing edges

All mod 4, of course.

So long as this condition holds at every vertex, the basis function is
gauge-invariant and acceptable.
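On a tiny graph this can all be checked by brute force. Here's a sketch, assuming two vertices joined by two parallel edges, comparing a direct gauge-invariance test of all 16 Fourier labellings against the conservation rule k1 + k2 = 0 (mod 4) at the vertices:

```python
from itertools import product

def edge_exponent(k, x_src, x_tgt):
    """Exponent (mod 4) of the phase exp(i k (x_tgt - x_src) pi / 2)."""
    return (k * (x_tgt - x_src)) % 4

# graph: vertices s, t with two parallel edges s -> t
invariant = []
for k1, k2 in product(range(4), repeat=2):
    # invariant iff the total phase is trivial for every gauge choice (xs, xt)
    ok = all((edge_exponent(k1, xs, xt) + edge_exponent(k2, xs, xt)) % 4 == 0
             for xs, xt in product(range(4), repeat=2))
    if ok:
        invariant.append((k1, k2))

predicted = [(k1, k2) for k1, k2 in product(range(4), repeat=2)
             if (k1 + k2) % 4 == 0]   # the conservation law

print(invariant)
print(invariant == predicted)  # True
```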

Some Interpretation

We should think a little about what this means.

The k_e are labels of irreps of C_4, taking the values 0, 1, 2 or 3. They
are what are known as quantum numbers. In fact they can be considered spin
quantum numbers, analogous to the 'azimuthal' quantum numbers labelling reps
of plane rotations in U(1), for instance. To put it another way, they
measure the angular momentum associated with the rotations of the square.

(Note, however, that because C_4 is discrete, the quantum numbers lie in
a compact group, namely Z/(4), aka C_4, rather than in Z.
Or, to be more precise, the Fourier functions form the compact group C_4
under pointwise multiplication, and this group behaviour is sort of
inherited by the corresponding quantum numbers.

The group of Fourier functions on a group, under pointwise multiplication,
is called the "dual group" of that group. In this case, C_4 is its own dual.
Discrete groups have compact duals, and vice versa. <Mumble mumble ...
discrete <--> compact ... mumble mumble ... Pontryagin Duality ... mumble
mumble ... Or, as someone once said, What a nice buzzing sound those words
make ... >)

Er ... where was I? ... ah, yes. So the k_e are quantised angular momenta;
so the condition means that the total incoming angular momentum _to_ any
vertex is equal to the total outgoing angular momentum _from_ that vertex.
In other words, the gauge symmetry implies a conservation law for angular
momentum: Emmy Noether rides again! Yay Emmy!

A Non-Abelian Gauge Group

I'm feeling guilty about exposing you only to the simple (in the non-
technical sense) C_4. Simplicity is great for understanding things, but it
can disguise important facts. So I'll chat a bit about a non-Abelian group,
just to fill out the picture a bit. I'll choose -- and why not? -- the
rotational symmetry group of the cube, aka S_4, the symmetric (i.e.
permutation) group on 4 elements.

(The four elements in question can be thought of as the diagonals of the
cube, between opposing vertices.)

Left! Right! Left! Right! ...

The basic description of the bundle of orientations of the cube is very
similar to that of the bundle of orientations of the square. ... set of
orientations of the cube at each vertex blah blah blah ... trivialisation is
a choice of base element blah blah blah ... connection is a map preserving
the action of S_4 blah blah blah ... gauge transformation a group element
for each vertex blah blah blah ... . The crucial difference is that whenever
we find ourselves multiplying group elements by other group elements, we
have to keep track of whether we're multiplying on the left or on the right.

So, having picked a trivialisation that identifies each orientation at each
vertex with a member of S_4, we want to act on the orientations with the
gauge group, S_4. To do this, we multiply on the left.

Our connection also acts as an element of S_4 on each edge, carrying the
source element to the target element. This also acts on the left:

g_e : g_s -> g_e * g_s

where g_e is the element of the connection associated with the edge e, and
g_s is an element of the bundle associated with the source vertex s. This
ensures that the internal relationship among elements at s is preserved --
if the relationship is represented by multiplication by elements of S_4 on
the _right_:

g_e : (g_s * g'_s) -> g_e * (g_s * g'_s) = (g_e * g_s) * g'_s.

Now we also want to know the effect of gauge transformations on the
connection.

We multiply by the inverse of the source gauge element, on the right; we
multiply by the target gauge element, on the left:

g_e -> g_t * g_e * g_s^-1

This ensures that the actual effect of the connection is preserved, and
gauge-transformed elements at the source get sent to their gauge-transformed
images at the target.
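
This left/right bookkeeping is easy to fumble by hand, so here's a little
Python sketch (my own addition, with arbitrary illustrative choices of g_e,
h_s and h_t) that checks the covariance property mechanically, representing
elements of S_4 as tuples:

```python
from itertools import permutations

# S_4 elements as tuples: p sends i to p[i].
S4 = list(permutations(range(4)))

def compose(p, q):
    """(p * q)[i] = p[q[i]] -- apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# A single edge e from vertex s to vertex t, with connection element g_e.
g_e = (1, 0, 3, 2)          # the paired transposition (12)(34), say

# A gauge transformation: one group element per vertex (arbitrary picks).
h_s = (1, 2, 3, 0)
h_t = (2, 0, 1, 3)

# Transformed connection: g_e -> h_t * g_e * h_s^-1
g_e_new = compose(h_t, compose(g_e, inverse(h_s)))

# Check: transporting the gauge-transformed source element gives the
# gauge-transformed image of the original transport, for every orientation x.
for x in S4:
    assert compose(g_e_new, compose(h_s, x)) == compose(h_t, compose(g_e, x))
print("gauge covariance holds for all", len(S4), "orientations")
```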

The other crucial and much more ramifying result of choosing a non-Abelian
gauge group is that not all of the representations are 1-dimensional any
more. We need to talk about this.

Representations of S_4

First I'll talk about the conjugacy classes of S_4. I do this partly because
of the theorem for finite groups which says that the number of irreps is
equal to the number of conjugacy classes, and partly because talking about
conjugacy classes gives some insight into the structure of S_4 as the
rotation group of the cube.

There are five conjugacy classes of S_4.

1) ()

The identity class. Contains only one element, the identity element. I don't
think there's any need to say more about this one.

2) (1234)

The cyclic permutations of all four elements. There are 6 of these
permutations. They correspond to the 90 degree rotations of the cube about
the axes passing through the centres of opposing faces.

3) (123)

Cyclic permutations of three elements, leaving the fourth one untouched.
There are eight of these permutations. They correspond to the 120 degree
rotations of the cube about the axes passing through the centres of
diametrically opposed vertices.

4) (12)

Simple transpositions. There are six of these. They correspond to the 180
degree rotations of the cube about the axes passing through the centres of
diametrically opposed edges.

5) (12)(34)

Paired transpositions. There are three of these. They correspond to the 180
degree rotations of the cube about the axes through the centres of opposing
faces.
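
If you don't feel like counting rotations of a cube in your head, here's a
quick Python check (not part of the cube story proper) that churns out the
conjugacy classes of S_4 by brute force and confirms the class sizes
1, 6, 8, 6 and 3 claimed above:

```python
from itertools import permutations

S4 = list(permutations(range(4)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugacy_class(p):
    """All elements g p g^-1, as g runs over the group."""
    return frozenset(compose(g, compose(p, inverse(g))) for g in S4)

classes = {conjugacy_class(p) for p in S4}
sizes = sorted(len(c) for c in classes)
print(len(classes), "classes of sizes", sizes)   # -> 5 classes, [1, 3, 6, 6, 8]
```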

These descriptions should make it easy to see how S_4 gets embedded in
SO(3). SO(3) is doubly covered by SU(2), and we could use this fact to give
a corresponding double cover of S_4, namely the 48-member "binary octahedral
group".

The paired transpositions, (12)(34), (13)(24) and (14)(23), together with
the identity, (), form a 4-element subgroup of S_4, isomorphic to the Klein
group V_4, aka Z_2 x Z_2. If we lift this group to its double cover in
SU(2), we get the eight-member "quaternion group" Q_8, consisting of the
quaternions +/-1, +/-i, +/-j, +/-k, under the usual quaternion
multiplication law.

The group generated by the paired transpositions is precisely the subgroup
of S_4 that sends each pair of opposing faces to itself (though possibly
switching some of the actual faces). If we quotient S_4 by this subgroup, we
get S_3, the symmetric group on 3 elements -- the 3 elements in question
being precisely the pairs of opposite faces of the cube. This turns
representations of S_3 into representations of S_4.

Oh yes ... so what actually _are_ the irreps of S_4?

OK, first there's the Trivial Representation, a 1-D irrep on which every
group element acts as the identity. Let's call it U.

Then there's the Alternating Representation, another 1-D irrep on which even
permutations act as the identity while odd permutations multiply everything
by -1. Let's call this one U'.

Then there's the 3-dimensional Standard Representation.

Every symmetric group S_n has an n-dimensional representation called the
Permutation Representation: pick a basis, and permute the basis elements
under the usual action of S_n on n elements.

In the Permutation Representation, the 1-d subspace given by the diagonal
line x_1 = x_2 = x_3 = ... = x_n is left alone, making the Trivial
Representation a subrepresentation of the Permutation Representation, which
is therefore not irreducible. However, if we quotient out by this
subrepresentation, we get an n-1 dimensional representation, which is
irreducible. It's called the Standard Representation.
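
A quick way to confirm the irreducibility claim is the character norm:
<chi, chi> = 1 exactly for irreps. Here's a Python sketch (my check, not
part of the original exposition) for the Standard Representation of S_4,
whose character is just "number of fixed points minus 1":

```python
from itertools import permutations

# Character of the permutation rep of S_4 = number of fixed points;
# the Standard Representation has character (fixed points) - 1.
S4 = list(permutations(range(4)))
chi = [sum(1 for i in range(4) if p[i] == i) - 1 for p in S4]

# <chi, chi> = (1/|G|) sum |chi(g)|^2; equals 1 iff the rep is irreducible.
norm2 = sum(c * c for c in chi) // len(S4)
print(norm2)    # -> 1, so the Standard Representation is irreducible
```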

So S_4 has a 3-dimensional irrep in the form of the Standard Representation.
Let's call this V.

If we tensor V with the Alternating Representation U', we get another irrep,
which we'll call V'. Even elements of S_4 act on it as they would on V,
while odd elements act as minus what they would on V. Since V is
3-dimensional and U' is 1-dimensional, V' must also be 3-dimensional.

That's four irreps. There must be a fifth, let's call it W. Some fiddling
with the character table (which I sadly don't have time to tell you about)
reveals that it's 2-dimensional. In fact it turns out to be nothing
other than the Standard Representation of S_3, where the S_3 in question is
the permutation group of the pairs of opposite faces of the cube which I
mentioned above.

And that's it, folks. U, U', V, V' and W. Each of these representations is
self-dual.

Now, just like the last time round, the next step in quantising the space of
connections modulo gauge transformations on the bundle of orientations of a
cube over a graph is to construct L^2 of the space of connections, putting
off the gauge transformations for a moment. Then we want to construct a nice
basis of this Hilbert space, in which the action of the gauge group will
be particularly perspicuous. For this, we want to do something like
Fourier transformations. But only something like, because the
non-Abelianness of our group S_4 will make things more complicated than last
time.

Like Fourier Transforms, Only More So

The space of complex-valued functions on S_4 is of course a vector space.
S_4 acts on itself by multiplication on the left, and thereby shuffles the
functions onto one another in a linear way. S_4 also acts on itself by
inverse multiplication on the right, and this is also linear. So the
24-dimensional space of functions over S_4 is a representation of S_4 in two
different ways. We want to decompose this as a direct sum, and a very
powerful theorem enables us to do this very easily.

We take each irrep of S_4, viz. U, U', V, V', and W. We tensor it with its
dual -- which is particularly simple in this case because all the irreps are
self-dual:

U (x) U, U' (x) U', V (x) V, V' (x) V', and W (x) W.

Now we direct sum them:

U (x) U + U' (x) U' + V (x) V + V' (x) V' + W (x) W.

And that's isomorphic, as a representation of S_4 x S_4, to the space of
functions on S_4, where the first factor corresponds to multiplication on
the left, and the second to inverse multiplication (hence the dual rep) on
the right.
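
A sanity check you can run on this decomposition: the dimension of the
function space is |S_4| = 24, so the squares of the irrep dimensions must
add up to 24. In Python, assuming nothing beyond the dimensions listed
above:

```python
from math import factorial

# Dimensions of the irreps of S_4 named in the text.
irrep_dims = {"U": 1, "U'": 1, "V": 3, "V'": 3, "W": 2}

# Each summand R (x) R* contributes (dim R)^2 dimensions, and the total
# must equal |S_4| = 4! = 24.
total = sum(d * d for d in irrep_dims.values())
assert total == factorial(4)
print(total)   # -> 24
```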

I ought to work this out in detail so we can see how it goes. However, I
can't face doing this for 24 functions on a 24-element space. So instead
I'll cheat, and work it out for the much simpler (but still non-Abelian)
group S_3, which has only 6 members.

There are 3 conjugacy classes of S_3:

() -- the identity class, with 1 member
(12) -- the transpositions, with 3 members
(123) -- the triplets, with 2 members.

Hence there are three irreps. These are

U, the Trivial Representation, which is 1-dimensional and sends everything
to itself.

U', the Alternating Representation, which is also 1-dimensional and on which
the identity and the triplets act as the identity, while the transpositions
multiply by -1.

V, the Standard representation, which is 2-dimensional.

To see how the Standard Representation goes, let's stick the following basis
on it:

a (1, w, v)
b (1, v, w)

where w (really omega) is the cube root of 1 with positive imaginary part,
and I'm writing v instead of w^2 -- this isn't standard, but I'm trying to
simplify the appearance of things in ASCII.

(If we use the inner product where

<(x1, x2, x3), (y1, y2, y3)> = (x1* y1 + x2* y2 + x3* y3)/3

with * denoting complex conjugation, then the above basis is orthonormal).

S_3 acts on a and b by permuting the three components, giving the following
matrices, as you can verify yourself:

(1 0)
() (0 1)

(0 v)
(12) (w 0)

(0 1)
(23) (1 0)

(0 w)
(13) (v 0)

(v 0)
(123) (0 w)

(w 0)
(132) (0 v)
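
If you'd rather let a machine do the verifying, here's a Python/numpy
sketch (mine, not Tim's) that rebuilds these matrices from the basis (a, b)
and checks the homomorphism property. The lstsq call just expresses each
permuted basis vector back in terms of a and b, which works because the
span of a and b is invariant:

```python
import numpy as np
from itertools import permutations

w = np.exp(2j * np.pi / 3)          # primitive cube root of unity
a = np.array([1, w, w**2])          # the basis vectors from the text
b = np.array([1, w**2, w])
B = np.column_stack([a, b])

def act(p, x):
    """Left action of a permutation on a 3-vector: entry i moves to slot p[i]."""
    y = np.empty(3, dtype=complex)
    for i in range(3):
        y[p[i]] = x[i]
    return y

def rep_matrix(p):
    """Matrix of p in the basis (a, b): solve B M = [p.a | p.b]."""
    img = np.column_stack([act(p, a), act(p, b)])
    M, *_ = np.linalg.lstsq(B, img, rcond=None)
    return M

S3 = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

# Homomorphism check: M(p*q) = M(p) M(q) for all 36 pairs.
for p in S3:
    for q in S3:
        assert np.allclose(rep_matrix(compose(p, q)),
                           rep_matrix(p) @ rep_matrix(q))
print("homomorphism verified")
```

For instance rep_matrix((1, 0, 2)), i.e. the transposition (12), comes out
as [[0, v], [w, 0]], matching the table above.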

Now the six dimensional space of complex-valued functions on S_3 can be
given the following basis:


        ()   (12)  (23)  (13)  (123) (132)

1        1     1     1     1     1     1
1'       1    -1    -1    -1     1     1
a1       1     1     v     w     w     v
b1       v     v     1     w     w     1
a2       v     w     1     v     1     w
b2       1     w     v     1     v     w

Under the action of S_3 on itself by multiplication on the left, or by
inverse multiplication on the right, the function labelled 1 clearly
transforms to itself, so is a basis of the rep U (x) U. Likewise, the
function labelled 1' is a basis of the rep U' (x) U'.

The remaining four functions a1, b1, a2 and b2 span a 4-dimensional space.
By multiplication on the left, a1 and b1 form the basis of a Standard Rep of
S_3, and so do a2 and b2. In fact, in these bases, the matrices take exactly
the same form as they do with the basis (a, b) shown above.

Under inverse multiplication on the right, (a1, a2) form the basis for a
Standard Rep of S_3, and so do (b1, b2). The corresponding matrices are:

(1 0)
() (0 1)

(1 -1)
(12) (0 -1)

(0 1)
(23) (1 0)

(-1 0)
(13) (-1 1)

(-1 1)
(123) (-1 0)

(0 -1)
(132) (1 -1)

as you can check yourself.

These matrices look different from the ones above, but this is simply due to
a change of basis. Conjugating them with the matrix

(1 w)
(w 1)

transforms them into the earlier forms.
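
Rather than hunt for the conjugating matrix by hand, you can check the
equivalence with characters: two reps of a finite group are isomorphic
exactly when their traces agree element by element. A small numpy sketch
(my addition), with both sets of matrices transcribed from above:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
v = w**2

# Left-action matrices in the basis (a, b), transcribed from the text.
left = {
    "e":     np.array([[1, 0], [0, 1]], dtype=complex),
    "(12)":  np.array([[0, v], [w, 0]]),
    "(23)":  np.array([[0, 1], [1, 0]], dtype=complex),
    "(13)":  np.array([[0, w], [v, 0]]),
    "(123)": np.array([[v, 0], [0, w]]),
    "(132)": np.array([[w, 0], [0, v]]),
}

# Inverse-right-action matrices in the bases (a1, a2) / (b1, b2), also
# transcribed from the text.
right = {
    "e":     np.array([[1, 0], [0, 1]]),
    "(12)":  np.array([[1, -1], [0, -1]]),
    "(23)":  np.array([[0, 1], [1, 0]]),
    "(13)":  np.array([[-1, 0], [-1, 1]]),
    "(123)": np.array([[-1, 1], [-1, 0]]),
    "(132)": np.array([[0, -1], [1, -1]]),
}

# Equal characters (traces) => the two 2-d reps are equivalent.
for g in left:
    assert np.isclose(np.trace(left[g]), np.trace(right[g]))
print("characters agree: the two bases carry the same irrep")
```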

So we have a rep V (x) V, with the first factor representing multiplication
on the left, and the second factor representing inverse multiplication on
the right. Actually, the second factor should really be the dual of V,
(denoted V*) -- as we saw earlier with representations of C_4. But in this
case, V and V* are the same, so this aspect is unfortunately disguised.

Finally, the three reps U (x) U, U' (x) U' and V (x) V, direct-summed, give
the behaviour of the whole function space under the action of S_3 acting by
multiplication on the left, or by inverse multiplication on the right. This
is particularly convenient because, of course, the gauge group acting on the
target vertex of an edge induces an action on the connection at that edge by
multiplication on the left; while the gauge group acting on the source
vertex of an edge induces an action on the connection at that edge by
inverse multiplication on the right. So we can keep track of the action of
the entire gauge group on each edge.

Gauge Transformations

As before, with the gauge group C_4^E, we want to pick out only those
functions of the space of connections which are unchanged by gauge
transformations. As before, this means we need to ensure that we pick
functions that are unchanged by the action of a gauge transformation at any
single vertex. This affects precisely those edges which begin or terminate
at that vertex.

As before, we can do a sort of E-dimensional Fourier analysis on the E edges
to get basis functions on the space of connections. The fact that our group
is now non-Abelian, however, introduces further complications.

Last time, each edge carried a 1-dimensional irrep corresponding to a
Fourier basis function, and E-dimensional Fourier analysis involved
tensoring those 1-dimensional irreps together to produce other 1-dimensional
irreps.

Now, however, not all of our irreps are 1-dimensional, so if we tensor them
together we get multi-dimensional representations which are not necessarily
irreducible. In short, some of our Fourier basis functions transform into
linear combinations of one another in a complicated way under the action of
the gauge group.

This must stop!

What we want is to pick out those functions on the space of connections
which transform into _themselves_ under all gauge transformations: that is,
which act as (a basis of) the trivial representation. One example is the
constant function, which is symbolised by putting the trivial representation
on every edge. However, the trivial representation also appears embedded as
a subrepresentation within other (reducible) representations, and we have to
find every instance of it.

Here's how we go about it.

To get subspaces which act as the trivial representation under all gauge
transformations, we need them to act as the trivial representation under a
gauge transformation at each vertex. Now, for a given labelling of the edges
by irreducible representations, each vertex contributes the tensor product
of the irreps labelling the incoming edges and the duals of the irreps
labelling the outgoing edges. (In the case of S_3 and S_4 and, I think, S_n
for every n, all irreps happen to be self-dual).

We need to decompose this tensor product into a direct sum of irreps, and
find the trivial irreps among them. Looking at the character table, which I
wish I had time to explain, but I don't, we can easily discover the
following direct sum decomposition of tensor products of irreps of S_4:

 (x) ||  U  |  U'  |     V     |     V'    |    W    |
=====++=====+======+===========+===========+=========|
  U  ||  U  |  U'  |     V     |     V'    |    W    |
-----++-----+------+-----------+-----------+---------|
  U' ||  U' |  U   |     V'    |     V     |    W    |
-----++-----+------+-----------+-----------+---------|
  V  ||  V  |  V'  | U+V+V'+W  | U'+V+V'+W |  V+V'   |
-----++-----+------+-----------+-----------+---------|
  V' ||  V' |  V   | U'+V+V'+W | U+V+V'+W  |  V+V'   |
-----++-----+------+-----------+-----------+---------|
  W  ||  W  |  W   |    V+V'   |    V+V'   | U+U'+W  |
------------------------------------------------------
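
Tables like this come straight out of character arithmetic: the
multiplicity of an irrep in a tensor product is the inner product of the
product character with the irrep's character. Here's a Python sketch using
the character table of S_4 (which I transcribe without proof; classes are
(), (12), (12)(34), (123), (1234)):

```python
# Conjugacy class sizes of S_4, in the order (), (12), (12)(34), (123), (1234).
sizes = [1, 6, 3, 8, 6]

# Character table of S_4, same class order.
chars = {
    "U":  [1,  1,  1,  1,  1],
    "U'": [1, -1,  1,  1, -1],
    "W":  [2,  0,  2, -1,  0],
    "V":  [3,  1, -1,  0, -1],
    "V'": [3, -1, -1,  0,  1],
}

def decompose(chi):
    """Multiplicity of each irrep in a character chi (all characters real here)."""
    return {name: sum(s * c * x for s, c, x in zip(sizes, chi, chars[name])) // 24
            for name in chars}

# V (x) V: the character of a tensor product is the pointwise product.
chiVV = [x * y for x, y in zip(chars["V"], chars["V"])]
print(decompose(chiVV))   # -> one copy each of U, V, V', W (and no U')
```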

I ought to give an example to show how this works, but as before, I'm going
to cheat and use S_3, which is simpler. The equivalent table for S_3 looks
like this:

(x) || U | U' | V |
=====++=====+=====+=========|
U || U | U' | V |
-----++-----+-----+---------|
U' || U' | U | V |
-----++-----+-----+---------|
V || V | V | U+U'+V |
-----------------------------

The only interesting one is V (x) V.

Taking the basis (a, b) which we constructed much further up when we were
looking at functions, we get a basis for the tensor product (aa, ab, ba, bb)
which we now proceed to subject to the action of S_3, multiplying by both
factors of 1, w or v wherever they occur, giving:

      |  aa    ab    ba    bb
------+-------------------------
()    |  aa    ab    ba    bb
(12)  | v bb   ba    ab   w aa
(23)  |  bb    ba    ab    aa
(13)  | w bb   ba    ab   v aa
(123) | w aa   ab    ba   v bb
(132) | v aa   ab    ba   w bb

Inspection reveals that
(ab + ba) acts as a basis of the Trivial Representation
(ab - ba) acts as a basis of the Alternating Representation
(bb, aa) acts as a basis of the Standard Representation

so we get our direct sum decomposition into U + U' + V.

(We ought to divide the first two by sqrt(2) to normalise them.)

There is only one copy of U in V (x) V, but in e.g. V (x) V (x) V (x) V
there are three copies (coming from U (x) U, U' (x) U' and V (x) V if we
treat this as the square of V (x) V). When picking out gauge-invariant
functions on the space of connections, we need to remember to pick up all
three, or our basis will be incomplete.
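
The count of three copies comes from the same character arithmetic; here's
a short Python check (mine again), using the character (2, 0, -1) of the
Standard Rep of S_3 on the classes (), (12), (123):

```python
# Conjugacy class sizes of S_3, in the order (), (12), (123).
sizes = [1, 3, 2]
chiU = [1, 1, 1]       # Trivial Representation
chiV = [2, 0, -1]      # Standard Representation

def mult(chi_rep, chi_irr):
    """Multiplicity of an irrep in a rep, via the character inner product."""
    return sum(s * x * y for s, x, y in zip(sizes, chi_rep, chi_irr)) // 6

# Character of V (x) V (x) V (x) V is the 4th pointwise power.
chiV4 = [c**4 for c in chiV]
print(mult(chiV4, chiU))   # -> 3 copies of the trivial rep
```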

There is one final rearrangement to make. We've got a map -- in fact a
homomorphism of representations -- from the tensor product of incoming reps
with the tensor products of the duals of the outgoing reps to the Trivial
Representation. However, the miracle of duality means that this is
equivalent to a map from the tensor products of incoming reps to the tensor
products of outgoing reps (_not_ their duals) which commutes with the group
action. Such a map between reps is called an intertwiner. The restriction
that incoming reps must have an intertwiner to outgoing reps is the
non-Abelian equivalent of the conservation law that says total incoming
<conserved quantity> must be equal to total outgoing <conserved quantity>.

But I don't think I can face explaining how that works.

Jeepers, 44 pages of notes to write a usenet post. I must be nuts. <Checks>
Yes, I _am_ nuts. That wasn't quite the light romp through spin networks I
thought it was going to be, more like _Three Lectures on Elementary
Representation Theory, Delivered at Breakneck Speed_, or maybe, given the
abundance of calculations and the paucity of proofs, that should be _Quantum
Gravity: The Engineering Approach_.

Ah well, if you are still reading and alive, I hope you have learned
something. And if not -- heigh, ho, at least _I've_ learned something.

Tim

Gerard Westendorp

Mar 12, 2004, 6:32:10 AM

Tim S wrote:

[..]

> OK, now we're ready to quantise.

[..]

> This means we are interested in L^2 on the space of connections. We
> have identified the space of connections with
>
> C_4 x C_4 x C_4 x ... x C_4 (E times)
>
> where E is the number of edges. This is a set with 4^E points, so L^2
> of the configuration space is just the vector space of all
> complex-valued functions on this set -- a 4^E-dimensional
> vector space -- together with a suitable norm.

How about things like commutation rules? How do they fit unto this
scheme?

[..]

> Jeepers, 44 pages of notes to write a usenet post.

That could be why this thread is a bit slow: it is quantized.

Gerard
