Since early childhood I have hated cleaning up. Now that I am a bit older, however, I sometimes realise it has to be done, especially if other people are involved (visitors, flat-mates, etc.). See however this and this.
And yes, when I'm doing DIY or installing devices/cables/networks etc., I am usually satisfied with the "great, it works (ahem, at least in principle)" stage.
But today (via Terry Tao's blog) I came across The Planarity Game, which might have changed my attitude towards tidying up... Have fun!
Wednesday, September 19, 2007
Tuesday, September 18, 2007
Not quite infinite
Lubos has a memo where he discusses how physicists make (finite) sense of divergent sums like 1+10+100+1000+... or 1+2+3+4+5+... . The latter is, as string theorists know, of course -1/12, as for example explained in GSW. Their trick is to read that sum as the value at s=-1 of sum_n n^{-s}, i.e. of the Riemann zeta function zeta(s), and to define that value via the analytic continuation of this expression, which is well defined only for real part of s>1.
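In formulas (my notation, but these are standard facts about the zeta function):

```latex
% The series defines \zeta(s) only where it converges,
\zeta(s) \;=\; \sum_{n=1}^{\infty} n^{-s} \quad\text{for } \mathrm{Re}(s) > 1,
\qquad\text{and by analytic continuation}\qquad
\zeta(-1) \;=\; -\tfrac{1}{12},
```

so the prescription is to declare 1+2+3+... "=" zeta(-1) = -1/12.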
Alternatively, he regularises the sum as sum_n n exp(-epsilon n) = 1/epsilon^2 - 1/12 + O(epsilon^2). Then, in an obscure analogy with minimal subtraction, he throws away the divergent 1/epsilon^2 term and takes the finite remainder as the physical value.
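As a quick numerical sanity check (my own sketch, not from the memo), one can watch the finite remainder of the exponentially regulated sum approach -1/12 as the regulator epsilon is removed:

```python
import numpy as np

# Exponential regulator for 1+2+3+...: S(eps) = sum_n n*exp(-eps*n).
# Closed form: exp(-eps)/(1-exp(-eps))^2 = 1/eps^2 - 1/12 + O(eps^2),
# so subtracting the divergent 1/eps^2 piece should leave roughly -1/12.
for eps in [0.1, 0.05, 0.01, 0.005]:
    n = np.arange(1, int(50 / eps))            # enough terms; the tail is exponentially small
    regulated = np.sum(n * np.exp(-eps * n))   # the regularised sum S(eps)
    finite_part = regulated - 1.0 / eps**2     # throw away the divergent piece
    print(f"eps = {eps:5.3f}   finite part = {finite_part:.6f}")

print("compare: -1/12 =", -1.0 / 12.0)
```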
He justifies this by claiming agreement with experiment (here in the case of a Casimir force). This, I think, is a bit too weak. If you rely on arguments like this it is unclear how far they take you when you want to apply them to new problems where you do not yet know the answer. Of course, it is good practice for physicists to take calculational short-cuts. But you should always be aware that you are doing this, and it feels much better if you can say "This is a bit dodgy, I know, and if you really insist we could actually come up with a rigorous argument that gives the same result", i.e. if you have a justification up your sleeve for what you are doing.
Most of the time, when you encounter an infinity in a physics calculation that should not be there (of course, often "infinity" is just the correct result; questions like "how much energy do I have to put into accelerating an electron to bring it up to the speed of light?" come to mind), you are actually asking the wrong question. This could for example be because you made an idealisation that is not physically justified.
Some examples come to mind: The 1+2+3+... sum arises when you naively compute the commutator of two Virasoro generators L_n for the free boson (the X fields on the string world sheet). There, L_n is given as an infinite sum over bilinears in the a_k's, the modes of X. In the commutator, each summand gives a constant from operator ordering, and when you sum up these constants you face the sum 1+2+3+...
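Schematically (standard free boson conventions, so take the normalisations with a grain of salt), the simplest place to see such an ordering constant is the reordering of L_0:

```latex
% L_m = (1/2) \sum_k a_{m-k} a_k  with  [a_m, a_k] = m\,\delta_{m+k,0}.
% Bringing the k<0 terms of L_0 into normal order produces a c-number k per mode:
L_0 \;=\; \frac{1}{2}\sum_{k\in\mathbb{Z}} a_{-k} a_k
    \;=\; \frac{1}{2}a_0^2 \;+\; \sum_{k>0} a_{-k} a_k \;+\; \frac{1}{2}\sum_{k>0} k ,
```

and the last, formally divergent, piece is exactly the kind of sum whose "value" is at stake; the same ordering constants show up summand by summand in the commutator.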
Once you have such an expression, you can of course regularise it. But you should be suspicious about whether what you are doing is actually meaningful. For example, it could be that you can come up with two regularisations that give different finite results. In that case you had better have an argument to decide which is the better one.
Such an argument could be a way to realise that the infinity is unphysical in the first place: In the Virasoro example, one should remember that the L_n stand for transformations of the states rather than for observables themselves (outer vs. inner transformations of the observable algebra). Thus you should always apply them to states. But for a state that is a finite linear combination of excitations of the Fock vacuum, only a finite number of terms in the sum for L_n fail to annihilate the state. So for each such state the sum is actually finite, the infinite sum is an illusion, and if you take a bit more care about which terms actually contribute you find a result equivalent to the -1/12 value. This calculation is the one you should actually have done, but the zeta function version is of course much faster.
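For the free boson (central charge c=1) this careful, manifestly finite version of the calculation boils down to (in the conventions of the sketch above):

```latex
% Acting on the Fock vacuum, only finitely many bilinears in L_{-n} survive:
L_{-n}\,|0\rangle \;=\; \frac{1}{2}\sum_{k=1}^{n-1} a_{-k}\, a_{-(n-k)}\,|0\rangle ,
\qquad
\langle 0|\, L_n L_{-n}\,|0\rangle \;=\; \frac{1}{2}\sum_{k=1}^{n-1} k\,(n-k) \;=\; \frac{n^3 - n}{12} ,
```

which is the central term (c/12)(n^3-n) with c=1, i.e. the same answer the zeta(-1)=-1/12 short-cut produces, but obtained without ever writing down a divergent sum.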
My problem with the zeta function version is that to me (and to all the people I have asked so far) it looks accidental: I see no extension of the argument that connects it to the rigorous calculation. From the Virasoro algebra perspective it is very unnatural to introduce s, as at least I know of no way to do the calculation with the L_n and a_k with a free parameter s.
Another example is the infinities that arise in Feynman diagrams. Those arise when you do integrals over all momenta p. There are of course the usual tricks to avoid these infinities. But the reason they work is that the integral over all p is unphysical: For very large p, your quantum field theory is no longer the correct description and you should include quantum gravity effects or similar things. You should only integrate p up to the scale where these other effects kick in and then do a proper computation that includes those effects. Again, the infinity disappears.
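A toy version of this statement (my own sketch, not a real QFT computation): the simplest log-divergent one-loop integral grows only logarithmically with the cutoff Lambda, so whatever happens above Lambda only shifts the answer by a finite amount that can be absorbed into a coupling constant.

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # toy infrared mass scale

# Toy log-divergent "one-loop" integral (Euclidean, angular factors dropped):
# I(Lambda) = int_0^Lambda dp  p^3 / (p^2 + m^2)^2  ~  log(Lambda/m)  for Lambda >> m.
def loop_integral(cutoff):
    integrand = lambda p: p**3 / (p**2 + m**2) ** 2
    value, _ = quad(integrand, 0.0, cutoff, limit=200)
    return value

for cutoff in [1e1, 1e2, 1e3, 1e4]:
    print(f"Lambda = {cutoff:8.0f}   I = {loop_integral(cutoff):8.4f}"
          f"   log(Lambda/m) = {np.log(cutoff / m):8.4f}")
```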
If you have a renormalisable theory you are especially lucky: There you don't really have to know the details of that high energy theory; you can subsume them into a proper redefinition of your coupling constants.
A similar thing can be seen in fluid dynamics: The Navier-Stokes equation has singular solutions, much like Einstein's equations lead to singularities. So what shall we do with, for example, infinite pressure? Well, the answer is simple: The Navier-Stokes equation applies to a fluid. But the fluid equations are only an approximation valid at macroscopic scales. If you look at small scales you find individual water molecules, and this discreteness is what saves you from actually encountering infinite values.
There is an approach to perturbative QFT, developed by Epstein and Glaser and explained for example in this book, that demonstrates that the usual infinities arise only because you have not been careful enough earlier in your calculation.
There, the idea is that your field operators are actually operator valued distributions and that you cannot always multiply distributions. Sometimes you can, if their singularities (the places where they are not a function but really a distribution) are in different places or point in different directions (in a precise sense), but in general you cannot.
The typical situation is that what you want to define (for example delta(x)^2) is still defined for a subset of your test functions. For example, delta(x)^2 is well defined for test functions that vanish in a neighbourhood of 0. So you start with a distribution defined only for those test functions. Then you want to extend that definition to all test functions, even those that do not vanish around 0. It turns out that if you restrict the degree of divergence (the maximum number of derivatives acting on delta; this will later turn out to be related to the superficial scaling dimension) to be below some value, there is a finite dimensional solution space to this extension problem. In the case of phi^4 theory, for example, the two point distribution is fixed up to a multiple of delta(x) and a multiple of the d'Alembertian of delta(x); the solution space is two dimensional (if Lorentz invariance is taken into account). The two coefficients have to be fixed experimentally and are of course nothing but mass and wave function renormalisation. In this approach the counter terms are nothing but the ambiguities of an extension problem for distributions.
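A simpler toy example of the same extension problem, with 1/|x| in one dimension instead of delta(x)^2 (just to illustrate the mechanism, not taken from the book):

```latex
% On test functions f that vanish near x=0 the distribution is unambiguous:
u(f) \;=\; \int \frac{f(x)}{|x|}\,dx .
% One possible extension to all test functions subtracts the divergence at 0:
\tilde u(f) \;=\; \int \frac{f(x) - f(0)\,\theta(1-|x|)}{|x|}\,dx ,
% and, with the degree of divergence restricted as above, any two extensions
% differ by a multiple of the delta function:
\tilde u'(f) - \tilde u(f) \;=\; c\, f(0) .
```

The constant c plays exactly the role of a renormalisation constant that has to be fixed by some additional condition.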
It has been shown in highly technical papers that this procedure is equivalent to BPHZ regularisation and dimensional regularisation, and thus it's safe to use the physicist's short-cuts. But it's good to know that the infinities that one cures could have been avoided in the first place.
My last example is of a slightly different flavour: Recently, I have met a number of mathematical physicists (i.e. mathematicians) who work on very complicated theorems about what they call stability of matter. What they are looking at is the quantum mechanics of molecules in terms of a Hamiltonian that includes a kinetic term for the electrons and Coulomb potentials for the electron-electron and electron-nucleus interactions. The positions of the nuclei are external (classical) parameters, and usually you minimise the energy with respect to them. What you want to show is that the spectrum of this Hamiltonian is bounded from below. This is highly non-trivial, as the Coulomb potential alone is not bounded from below (-1/r becomes arbitrarily negative) and you have to balance it with the kinetic term. Physically, you want to show that you cannot gain an infinite amount of energy by throwing an electron into the nucleus.
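At the very crudest level the mechanism is the usual uncertainty-principle estimate for the hydrogen atom (which is of course far from the actual many-body theorems, but shows which terms balance):

```latex
% Localising the electron within radius r costs kinetic energy ~ \hbar^2/(2 m r^2):
E(r) \;\sim\; \frac{\hbar^2}{2 m r^2} \;-\; \frac{e^2}{r} ,
\qquad
\min_r E(r) \;\sim\; -\,\frac{m e^4}{2 \hbar^2} ,
```

so for a single electron and a single point nucleus the kinetic term wins at small r and the energy stays bounded from below; the hard part of the theorems is making this balance survive for many electrons and many nuclei at once.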
Mathematically, this is a problem about complicated PDEs, and people have made progress using very sophisticated tools. What is not clear to me is whether this question is really physical: It could well be that it arises from an over-simplification. The nuclei are not point-like, so the true charge distribution is not singular and the physical potential is not unbounded from below. In addition, if you are worried about high energies (as would be around if the electron fell into a nucleus), the Schrödinger equation would no longer be valid and would have to be replaced by the Dirac equation, and then of course the electromagnetic interaction should no longer be treated classically and a proper QED calculation should be done. So if you are worried about what happens to the electron close to the nucleus in Schrödinger theory, you are asking an unphysical question. What could still be a valid result (and it might look very similar to a stability result) is to show that you never really leave the domain of applicability of your theory, because the kinetic term prevents the electrons from spending too much time very close to the nucleus (classically speaking).
What is shared by all these examples is that some calculation of a physically finite property encounters infinities that have to be treated, and I have tried to show that those typically arise because earlier in your calculation you were not careful and stretched an approximation beyond its validity. If you had taken that into account there wouldn't have been an infinity, but possibly a much more complicated calculation. And in lucky cases (as in the renormalisable situation) you can get away with ignoring these complications. However, you can sleep much better if you know that there would have been another calculation without infinities.
Update: I have just found a very nice text by Terry Tao on a similar subject, namely "knowing there is a rigorous version somewhere".