Monday, March 14, 2005

After John Baez wrote about Clifford bundles in TWF211, Peter Woit commented on it, and Lubos commented in turn, let me reiterate my favourite property of Clifford algebras: Fierz identities.
Remember that given a vector space V with a non-degenerate bilinear form b, you can define a Clifford algebra C(V) by saying that there is a linear map gamma from V to C(V) such that
{gamma(v),gamma(w)} = 2 b(v,w) 1
and such that any other such map factors through it. Physicists usually denote the generators of C(V) by gamma(i), where i labels an orthonormal basis of V.
The important property is that, as a vector space, C(V) is isomorphic to the exterior algebra over V. That is, if V is the tangent space at some point of a manifold, you can view any element of C(V) as a differential form. Fierz identities tell you what this map looks like concretely.
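To make this concrete, here is a minimal numerical sketch (in Python with numpy; the example and conventions are mine, not from TWF211): the Pauli matrices represent the Clifford algebra of R^3 with its standard scalar product, and the 2^3 ordered products of distinct generators give a vector-space basis whose dimension matches that of the exterior algebra.

```python
import numpy as np
from itertools import combinations

# Pauli matrices represent the Clifford algebra of R^3:
# {gamma_i, gamma_j} = 2 delta_ij
gamma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

for i in range(3):
    for j in range(3):
        anti = gamma[i] @ gamma[j] + gamma[j] @ gamma[i]
        assert np.allclose(anti, 2 * (i == j) * np.eye(2))

# vector-space basis of C(V): ordered products of distinct generators,
# one for each subset of {1,2,3}, so 2^3 = 8 of them, matching
# dim Lambda(R^3) = (3 choose 0) + (3 choose 1) + (3 choose 2) + (3 choose 3)
basis = []
for k in range(4):
    for idx in combinations(range(3), k):
        m = np.eye(2, dtype=complex)
        for i in idx:
            m = m @ gamma[i]
        basis.append(m)

# check linear independence over the reals (C(V) here is an 8-dim real algebra)
flat = np.array([np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in basis])
print("dim C(V) =", np.linalg.matrix_rank(flat))  # prints 8 = 2^3
```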
If you work in higher-dimensional theories and care about fermions, you probably spend a significant amount of your time working out the details of this map in specific cases.
Now, what is the relation between Clifford algebras and susy that Peter was referring to? Lubos guessed that it is just the fact that the supercharge is a spinor and thus a module over a Clifford algebra. But there is a much more interesting relation, and it involves division algebras.
Remember, there are four division algebras: the reals, the complex numbers, the quaternions, and the octonions. They have a number of defining properties, and amongst them is alternativity: not all of them are associative, so you can define the associator
t(a,b,c) = a(bc)-(ab)c
Alternativity tells you that t is antisymmetric under swapping any two of a, b, c. Note that t is quadratic in the structure constants of the division algebra, as there are two products involved.
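You can check this numerically. Here is a sketch that builds the octonions by Cayley-Dickson doubling (one standard sign convention; all names are my choice) and verifies that the associator flips sign under swapping any two arguments:

```python
import numpy as np

def conj(x):
    # Cayley-Dickson conjugate: (a, b)* = (a*, -b)
    if len(x) == 1:
        return x.copy()
    h = len(x) // 2
    return np.concatenate([conj(x[:h]), -x[h:]])

def cd_mult(x, y):
    # Cayley-Dickson product: (a,b)(c,d) = (ac - d*b, da + bc*);
    # length 1, 2, 4, 8 gives reals, complexes, quaternions, octonions
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return np.concatenate([cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))])

def associator(a, b, c):
    return cd_mult(a, cd_mult(b, c)) - cd_mult(cd_mult(a, b), c)

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(8) for _ in range(3))
t = associator(a, b, c)
assert not np.allclose(t, 0)                 # octonions are not associative...
assert np.allclose(t, -associator(b, a, c))  # ...but t is totally antisymmetric
assert np.allclose(t, -associator(a, c, b))
print("t(a,b,c) =", np.round(t, 3))
```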
Furthermore, you should know that there is a nice representation of the SO(9,1) gamma matrices (generators of the above Clifford algebra): for gamma^+ and gamma^- you use the usual form for Weyl spinors, and the remaining eight are block off-diagonal. The blocks are given by
c^i_jk = c_ijk if all of 1 <= i,j,k <= 7,
c^8_jk = delta_jk for 1 <= j,k <= 7 (and cyclically when the 8 sits in one of the other slots),
and c^i_jk = 0 if two of the indices are 8,
where the c_ijk are the structure constants of the octonions, as summarized by the usual triangle (Fano plane) diagram. There are similar expressions for the gammas of SO(5,1), SO(3,1), and SO(2,1), related to the other division algebras.
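Again a sketch one can run (reusing the Cayley-Dickson conventions from above; note that here the real unit sits at index 0 where the text uses index 8): the blocks are the matrices of left multiplication by the eight octonion units, and they satisfy the chiral SO(8) Clifford relation.

```python
import numpy as np

def conj(x):
    if len(x) == 1:
        return x.copy()
    h = len(x) // 2
    return np.concatenate([conj(x[:h]), -x[h:]])

def cd_mult(x, y):
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return np.concatenate([cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))])

e = np.eye(8)  # e[0] = 1 and e[1..7] = the imaginary octonion units

# sigma[a] = matrix of left multiplication by e[a]:
# (sigma_a)_{jk} = j-th component of e_a * e_k; its entries are the
# structure constants c_ijk plus the delta terms described in the text
sigma = np.array([[cd_mult(e[a], e[k]) for k in range(8)]
                  for a in range(8)]).transpose(0, 2, 1)

# chiral SO(8) Clifford relation; putting these blocks off-diagonally
# (together with the light-cone gamma^+, gamma^-) gives the SO(9,1) gammas
for a in range(8):
    for b in range(8):
        lhs = sigma[a] @ sigma[b].T + sigma[b] @ sigma[a].T
        assert np.allclose(lhs, 2 * (a == b) * np.eye(8))
print("sigma_a sigma_b^T + sigma_b sigma_a^T = 2 delta_ab: OK")
```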
Now you open GSW volume 1 at the appendix to chapter 4, where super Yang-Mills is discussed. There it is argued that you would guess the susy variation of the gauge field to be proportional to the spinor and the susy variation of the spinor to be proportional to the field strength (using gamma matrices to soak up the indices). Then they guess an action of the form
L = F^2 + psi D psi
where D is the covariant Dirac operator. It is relatively easy to adjust the relative factor so that the variations of the two terms cancel against each other. But one term is special: if you vary the gauge field inside D, you get a term with three psi's, and it is the only such term, so it has to vanish by itself. It also comes with two gamma matrices. Here the above-mentioned Fierz identities come to the rescue: they tell you that exactly in 3, 4, 6, and 10 dimensions this combination of gammas vanishes. This is why there is super Yang-Mills in precisely those dimensions.
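Here is a minimal numerical check in the simplest case, d = 3, where a Majorana representation is real (conventions are mine, not GSW's): the combination of two gammas, totally symmetrized over the three spinor indices that sit on the psi's, vanishes identically.

```python
import numpy as np

# Majorana representation of the SO(2,1) gammas, eta = diag(-1, 1, 1):
# gamma^0 = i sigma_2 (real), gamma^1 = sigma_1, gamma^2 = sigma_3
g = [np.array([[0., 1.], [-1., 0.]]),
     np.array([[0., 1.], [1., 0.]]),
     np.array([[1., 0.], [0., -1.]])]
C = g[0]                            # charge conjugation: C gamma^mu is symmetric
eta = np.diag([-1., 1., 1.])

G = np.array([C @ gm for gm in g])          # G[mu]_{ab} = (C gamma^mu)_{ab}
Glow = np.einsum('mn,nab->mab', eta, G)     # lower the vector index

# S_{abcd} = sum_mu (C gamma^mu)_{ab} (C gamma_mu)_{cd}
S = np.einsum('mab,mcd->abcd', G, Glow)

# Fierz identity: total symmetrization over the indices b, c, d kills S;
# this is exactly the cancellation of the three-psi term in the variation
T = (S + np.einsum('acdb->abcd', S) + np.einsum('adbc->abcd', S)) / 3.0
print("max |Fierz combination| =", np.abs(T).max())  # prints 0.0
```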
But now you can plug the above representation of the gamma matrices into this expression. The candidate combination is quadratic in the gammas, hence quadratic in the octonion structure constants. Guess what the Fierz identity corresponds to in terms of the division algebras!
So, the workings of susy theories rely on Fierz identities, and those are deeply encoded in the structure of the various Clifford algebras.
NB the little group in 10D is SO(8), which has triality (its Dynkin diagram is the star): the vector and the two spinor representations are equivalent. This was also needed in the above representation (I didn't use different types for the three different indices of the gammas). And this 8 is of course related to the 8 in Bott periodicity. But this is just because with Clifford algebras (and other "special" algebraic structures like division algebras and exceptional Lie groups), everything is related. Fnord.
Thursday, March 10, 2005
Deconstruction Help Sought
This is a comment on Lubos' article
on deconstruction. As it got quite long and I would really like to have an answer, I post it here as well:
Maybe this is the right place to ask a question about deconstruction that I have had for quite some time: IIRC, deconstruction (at least for the 6D (2,0) theory relevant for M5 and NS5 branes) starts by looking at D3 branes at an A_N orbifold, which can be written as C^2/Z_N. This gives you the A_N quiver theory.
Then you take a limit in which N goes to infinity while you move away from the orbifold singularity. The resulting formulas look to me pretty much as if you had modded out a Z from C^2 to end up with a cylinder R^3 x S^1.
Furthermore, what used to be the quiver theory looks like Wati Taylor's version of T-duality in M(atrix) theory (or any other D-brane gauge theory). In the classic
hep-th/9611042
he describes how you grow an extra dimension from an "SU(N x infinity)" theory. This should give you a D4 in IIA, and you can apply the usual M-magic to turn it into an M5 or NS5. So what more is there in deconstruction than M(atrix) T-duality?
Some people I have asked suggested that the advantage is that you express everything in terms of a renormalizable 4D theory. But this is strictly true only for finite N.
If that were the whole story, you could take any 6D theory and compactify it on a two-torus of finite size. You Fourier-decompose all fields in the compact directions. The Fourier components are formally fields in 4D and thus have renormalizable couplings (for a gauge theory, say). Of course, non-renormalizability comes back when you realize that you have an infinite number of component fields, and the sum over all the components will diverge.
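As a toy illustration of that last point (numbers and units are arbitrary, my own example): count the Fourier (Kaluza-Klein) modes on a square torus that lie below a cutoff. Their number grows like the area pi (Lambda R)^2, which is exactly the 6D growth of degrees of freedom that no finite set of renormalizable 4D fields reproduces.

```python
import numpy as np

# KK masses on a square two-torus of radius R: m(n1,n2) = sqrt(n1^2 + n2^2)/R.
# The number of 4D component fields below a cutoff Lambda grows like
# pi (Lambda R)^2, so any sum over the full tower brings 6D behaviour back.
R = 1.0
for cutoff in (10, 100, 500):
    n = np.arange(-cutoff, cutoff + 1)
    n1, n2 = np.meshgrid(n, n)
    count = np.count_nonzero(np.hypot(n1, n2) / R <= cutoff)
    print(cutoff, count, count / (np.pi * (cutoff * R) ** 2))  # ratio -> 1
```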
Note that I am not trying to argue that all large N limits are ill-defined (that would be stupid); I am just saying that the argument as I understand it sounds too simple to me.
PS: blogspot.com is sicker than ever. How long will it take me to get this posted?
Monday, March 07, 2005
Ostwald ripening and separation of scales
The physics group at IUB is quite small, so the number of exciting seminars is relatively limited. But the mathematicians have a weekly colloquium that is usually worth attending. Had I started reporting on it earlier, I would have been talking about generalized theta functions in number theory and string theory, amongst other things. But that is the past; today we had Barbara Niethammer, who talked about Ostwald ripening.
I have to admit I had no idea what that is, but I learned that it is a theory of crystal formation in a solution. You first start out with many small crystals, but eventually the bigger ones grow at the expense of the smaller ones. The big question is whether there is self-similar behaviour.
The model is quite simple: you start with a number of spherical crystals at random fixed positions and with random sizes. Then (on very short time scales) there is diffusion in the solution. This can be described by a local chemical potential u. The diffusion makes sure u is harmonic in the bulk, and at the surface of each spherical crystal it is given by 1/(radius of that crystal). You can easily solve this Dirichlet problem. But now you turn on the growth and shrinking of the crystals by imposing that the time derivative of a crystal's volume is given by the flux of grad(u) through the boundary of the sphere. This induces a long-range interaction between the crystals, and it turns out that the sum of the crystal volumes is constant in time.
Now you assume that the crystals are sparse, so you can make a mean-field approximation: each crystal just sees u with boundary conditions given by the average u, which turns out to be 1/(average radius). Effectively, this is now a system of coupled ODEs, with the radii of the crystals and the average u as time-dependent variables.
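Here is a minimal sketch of these mean-field ODEs as I understand them (my own forward-Euler reconstruction with arbitrary parameters, not anything from the talk). Solving the Dirichlet problem around a single sphere in the background u_bar gives, in suitable units, dR/dt = (u_bar - 1/R)/R, and demanding that the total volume stays constant fixes u_bar = 1/(average radius):

```python
import numpy as np

# each crystal radius evolves as dR_i/dt = (u_bar - 1/R_i)/R_i, where the
# mean field u_bar = N / sum_i R_i = 1/(average radius) is fixed by
# conservation of the total volume sum_i R_i^3
rng = np.random.default_rng(1)
R = rng.uniform(0.5, 1.5, size=2000)   # random initial radii
dt = 2e-4
volume0 = np.sum(R ** 3)

for step in range(50000):
    u_bar = len(R) / R.sum()            # = 1 / mean radius
    R = R + dt * (u_bar - 1.0 / R) / R  # above-average crystals grow, others shrink
    R = R[R > 1e-3]                     # crystals that shrink to zero drop out

print("crystals left:", len(R))
print("relative volume drift:", np.sum(R ** 3) / volume0 - 1.0)  # small step error
print("mean radius:", R.mean())         # has grown: coarsening, ~ t^(1/3) at late times
```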
But there is a second step of idealization, in which you only keep the average u and the number density f(r,t) of crystals of radius r. The evolution now becomes a transport equation for this f, and you can study its late-time behaviour starting from some initial data.
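Spelled out (again my reconstruction, in the same rescaled units as the sketch above, not a formula from the talk), the transport equation is just the continuity equation for f with the single-crystal growth velocity, and volume conservation again fixes the mean field:
d/dt f(r,t) + d/dr [ (u_bar(t) - 1/r)/r f(r,t) ] = 0, with u_bar(t) = (integral f dr)/(integral r f dr).
With this choice of u_bar one checks that the total volume, integral r^3 f dr, is indeed constant in time.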
Of course r has to be positive, and crystals of radius 0 drop out of the system. Studying the transport equation, you find that it basically acts by stretching in r, then imposing the cut-off, and then renormalizing (as the total volume is conserved). And indeed, the solution approaches static distributions (after scaling out a trivial t^(1/3) growth), at least for nice initial data. Today, only initial data with compact support was discussed, and it turned out that the form of the asymptotic solution depends only on the form of the initial data at the upper end of its support. This can easily be understood from the stretch, cut off, renormalize form of the transport equation. For example, if the distribution ends with a delta function peak, that is, if there is at least one largest crystal, all the volume eventually ends up in these largest crystals. If the initial distribution goes to zero with some power at its upper end, the static solution is characteristic of that power. And finally, if there is no leading power (for example because the distribution is given by a power series in 1/(r - r_max)), then there is no asymptotic solution. There were also pictures, which looked quite interesting.
The reason why I am talking about this is that I think it is a nice example of a non-renormalizable system: its late-time behaviour depends on the infinitely small scales (at the upper end of the distribution function). So in the long run (late times), the behaviour is completely determined by the UV. (Well, one could say that at late times the second approximation, describing the large number of crystals in terms of a distribution function, breaks down. But one could forget about the crystals and consider the distribution-function theory as the microscopic one.)
If the real theory of everything had such a behaviour, we would just have to wait long enough to find out how the world looks at infinitely small scales. Maybe waiting for larger and larger accelerators is a similar endeavour.