Monday, September 12, 2011

What is hidden in an infinity?

by Daniele Oriti, Albert Einstein Institute, Golm, Germany


Matteo Smerlak, ENS Lyon
Title: Bubble divergences in state-sum models
PDF of the slides (180k)
Audio [.wav 25MB], Audio [.aif 5MB].

Physicists tend to dislike infinities. In particular, they take it very badly when the result of a calculation turns out to be not some number that they could compare with experiments, but infinite. No energy or distance, no velocity or density, nothing in the world around us has infinity as its measured value. Most of the time, such infinities signal that we have not been smart enough in dealing with the physical system we are considering, that we have missed some key ingredient in its description, or that we have used the wrong mathematical language to describe it. And we do not like to be reminded of our own lack of cleverness.

At the same time, and as a confirmation of the above, much important progress in theoretical physics has come out of a successful intellectual fight with infinities. Examples abound, but here is a historic one. Consider a large 3-dimensional hollow spherical object whose inside is made of some opaque material (thus absorbing almost all the light hitting it), and assume that it is filled with light (electromagnetic radiation) maintained at constant temperature. This object is called a black body. Imagine now that the object has a small hole from which a limited amount of light can exit. If one computes the total energy (i.e. considering all possible frequencies) of the radiation exiting from the hole, at a given temperature and at any given time, using the well-established laws of classical electromagnetism and classical statistical mechanics, one finds that it is infinite. Roughly, the calculation looks as follows: you have to add up the contributions to the total energy of the radiation emitted (at any given time) coming from all the infinitely many modes of oscillation of the radiation at the temperature T. Since there are infinitely many modes, the sum diverges. Notice that the same calculation can be performed by first imagining that there exists a maximum possible mode of oscillation, and then studying what happens when this supposed maximum is allowed to grow indefinitely. After the first step the calculation gives a finite result, but the original divergence reappears after the second step. In any case, the sum gives a divergent result: infinity! However, this two-step procedure allows one to understand better how the quantity of interest diverges.
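To make this a bit more concrete, here is the classical calculation in formulas (standard textbook material, written in my own notation rather than taken from the talk): each mode of oscillation carries the same average energy kT, the number of modes with frequency below a cutoff ν_max in a cavity of volume V grows like the cube of the cutoff, and so the total energy is

E(T) = \int_0^{\nu_{\max}} \frac{8\pi V}{c^3}\, \nu^2 \, kT \, d\nu = \frac{8\pi V k T}{3 c^3}\, \nu_{\max}^3 \;\longrightarrow\; \infty \quad \text{as } \nu_{\max} \to \infty ,

finite for any finite cutoff, but growing without bound when the supposed maximum mode is removed.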

Besides being a theoretical absurdity, this result is simply false on experimental grounds, since such radiating objects can be realized rather easily in a laboratory. This represented a big crisis in classical physics at the end of the 19th century. The solution came from Max Planck, with the hypothesis that light is in reality constituted by discrete quanta (akin to matter particles), later named photons, and with a consequently different formula for the radiation emitted from the hole (more precisely, for the individual contributions to it). This hypothesis, initially proposed with quite different motivations, not only solved the paradox of the infinite energy, but spurred the quantum mechanics revolution which led (after the work of Bohr, Einstein, Heisenberg, Schroedinger, and many others) to the modern understanding of light, atoms and all fundamental forces (except gravity).
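In the same notation as above, and again just as a standard textbook sketch, Planck's hypothesis amounts to replacing the classical energy kT of each mode with the quantum expression hν/(e^{hν/kT} − 1), which is exponentially small for high-frequency modes; the total energy then becomes

E(T) = \int_0^{\infty} \frac{8\pi V}{c^3}\, \nu^2 \, \frac{h\nu}{e^{h\nu/kT} - 1}\, d\nu = \frac{8\pi^5 V (kT)^4}{15\, c^3 h^3} ,

a perfectly finite number, with no cutoff needed at all.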

We see, then, that the need to understand what was really lying inside an infinity, the need to confront it, led to an important jump forward in our understanding of Nature (in this example, of light), and to a revision of our most cherished assumptions about it. The infinity was telling us just that. Interestingly, a similar theoretical phenomenon now seems to suggest that another, maybe even greater, jump forward is needed: a new understanding of gravity and of spacetime itself.

An object that is theoretically very close to a perfect black body is a black hole. Our current theory of matter, quantum field theory, in conjunction with our current theory of gravity, General Relativity, predicts that such a black hole will emit thermal radiation at a constant temperature inversely proportional to the mass of the black hole. This is called Hawking radiation. This result, together with the description of black holes provided by general relativity, also suggests that black holes have an entropy associated to them, measuring the number of their intrinsic degrees of freedom. Because a black hole is nothing but a particular configuration of space, this entropy is then a measure of the intrinsic degrees of freedom of (a region of) space itself! However, first of all, we have no real clue what these intrinsic degrees of freedom are; second, if the picture of space provided by general relativity is correct, their number and the corresponding entropy are infinite!
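For the record, the standard formulas behind these two statements (they were not spelled out above) are Hawking's temperature and the Bekenstein-Hawking entropy of a black hole of mass M and horizon area A:

T_H = \frac{\hbar c^3}{8 \pi G M k_B}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar} .

The temperature is indeed inversely proportional to the mass, and the entropy is proportional to the area of the horizon, i.e. to a property of a region of space itself.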

This fact, together with a large number of other results and conceptual puzzles, prompted a large part of the theoretical physics community to look for a better theory of space (and time), possibly based on quantum mechanics (taking on board the experience from history): a quantum theory of space-time, a quantum theory of gravity.

One should not conclude that the transition from classical to quantum mechanics led us away from the problem of infinities in physics. On the contrary, our best theories of matter and of the fundamental forces, quantum field theories, are full of infinities and divergent quantities. What we have learned from quantum field theories, however, is exactly how to deal with such infinities in rather general terms, what to expect, and what to do when they present themselves. In particular, we have learned another crucial lesson about Nature: physical phenomena look very different at different energy and distance scales, i.e. when we look at them very closely or when they involve higher and higher energies. The methods by which we deal with this scale dependence go under the name of the renormalization group, now a crucial ingredient of all theories of particles and materials, both microscopic and macroscopic. How this scale dependence is realized in practice depends, of course, on the specific physical system considered.

Let us consider a simple example. Consider the dynamics of a hypothetical particle with mass m and no spin; assume that what can happen to this particle during its evolution is only one of the following two possibilities: it can either disintegrate into two new particles of the same type or disintegrate into three particles of the same type. Also, assume that the inverse processes are allowed (that is, two particles can disappear and give rise to a single new one, and so can three particles). So there are two possible ‘interactions’ that this type of particle can undergo, two possible fundamental processes that can happen to it. To each of them we associate a parameter, called a ‘coupling constant’, that indicates how strong each possible interaction process is (compared with the other, and with other possible processes due, for example, to the interaction of the particles with gravity or with light): one for the process involving three particles, and one for the process involving four particles (this is counting incoming and outgoing particles). Now, the basic object that a quantum field theory allows us to compute is the probability (amplitude) that, if I first see a number n of particles at a certain time, at a later time I will instead see m particles, with m different from n (because some particles will have disintegrated and others will have been created). All the other quantities of physical interest can be obtained using these probabilities.
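As a concrete illustration (the choice of model here is mine, just the simplest one fitting the description above), such a particle can be described by a single scalar field φ of mass m, with one coupling constant λ₃ for the three-particle process and one coupling constant λ₄ for the four-particle process; schematically, the action defining the theory reads

S[\phi] = \int d^4x \left[ \frac{1}{2} (\partial_\mu \phi)^2 + \frac{1}{2} m^2 \phi^2 + \frac{\lambda_3}{3!} \phi^3 + \frac{\lambda_4}{4!} \phi^4 \right] .

The cubic term generates the elementary processes in which one particle turns into two (or two into one), and the quartic term those in which one particle turns into three (or three into one).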

Moreover, the theory tells me exactly how this probability should be computed. It goes roughly as follows. First, I have to consider all possible processes leading from n particles to m particles, including those involving an arbitrarily large number of elementary creation/disintegration processes. These can be represented by graphs (called Feynman graphs) in which each vertex represents a possible elementary process (see the figure for an example of such a process, made out of interactions involving three particles only, with its associated graph).



A graph describing a sequence of 3-valent elementary interactions for a point particle, with 2 particles measured both at the initial and at the final time (to be read from left to right)


Second, each of these processes should be assigned a probability (amplitude), that is, a function of the mass of the particle considered and of the ‘coupling constants’. Third, this amplitude is in turn a function of the energy of each particle involved in the given process (each particle corresponding to a single line in the graph representing the process), and this energy can be anything, from zero to infinity. The theory tells me what form the probability amplitude has. The total probability for measuring n particles first and m particles later is then computed by summing over all processes/graphs (including those composed of arbitrarily many elementary processes) and over all the energies of the particles involved in them, weighted by the probability amplitudes.
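Written out schematically (again in my own notation, as a sketch of the standard perturbative expansion rather than a formula from the seminar), the total probability amplitude for n particles turning into m particles has the form

A(n \to m) = \sum_{\text{graphs } \Gamma} \frac{\lambda_3^{V_3(\Gamma)} \, \lambda_4^{V_4(\Gamma)}}{\mathrm{sym}(\Gamma)} \int \prod_{\text{internal lines } \ell} \frac{d^4 p_\ell}{(2\pi)^4} \, \frac{1}{p_\ell^2 + m^2} \, \prod_{\text{vertices } v} \delta^4\!\Big( \sum_{\ell \in v} p_\ell \Big) ,

where V₃ and V₄ count the three-valent and four-valent vertices of the graph Γ, sym(Γ) is a combinatorial symmetry factor, each internal line carries a weight depending on its energy-momentum p and on the mass m (written here in Euclidean form), and the delta functions enforce energy-momentum conservation at each vertex. The sum over graphs and the integrals over the momenta are exactly the ‘sum over all processes and over all energies’ described in words above.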

Now, guess what? The above calculation typically gives the always feared result: infinity. Basically, everything that could go wrong actually goes wrong, as in Murphy’s law. Not only does the sum over all the graphs/processes give a divergent answer, but the intermediate sum over energies diverges as well. However, as we anticipated, we now know how to deal with this kind of infinity; we are not scared anymore and, actually, we have learnt what it means, physically. The problem mainly arises when we consider higher and higher energies for the particles involved in the process. For simplicity, imagine that all the particles have the same energy E, and assume this can take any value from 0 to a maximum value Emax. Just like in the black body example, the existence of the maximum implies that the sum over energies is a finite number, so up to here everything goes fine. However, when we let the maximal energy become infinite, the same quantity typically becomes infinite as well.
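A minimal example of the kind of truncated sum over energies involved here, chosen purely for illustration, is the following toy integral, where m is the mass of the particle and Emax the cutoff:

I(E_{\max}) = \int_0^{E_{\max}} \frac{E^3 \, dE}{(E^2 + m^2)^2} \;\sim\; \log \frac{E_{\max}}{m} \quad \text{for } E_{\max} \gg m ,

finite for any finite Emax, but growing without bound (logarithmically, in this particular example) when the cutoff is sent to infinity.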

We have done something wrong; let’s face it: there is something we have not understood about the physics of the system (simple particles though they may be). It could be that, as in the case of blackbody radiation, we are missing something fundamental about the nature of these particles, and we have to change the whole probability amplitude. Maybe other types of particles have to be considered as created out of the initial ones. All this could be. However, what quantum field theory has taught us is that, before considering these more drastic possibilities, one should try to rewrite the above calculation with coupling constants and a mass that themselves depend on the scale Emax, then compute the probability amplitude again, now using these ‘scale dependent’ constants, and check whether one can now consider the case of Emax growing up to infinity, i.e. consider arbitrary energies for the particles involved in the process. If this can be done, i.e. if one can find coupling constants depending on the scale such that the result of sending Emax to infinity, i.e. of considering larger and larger energies, is a finite, sensible probability, then there is no need for further modifications of the theory, and the physical system considered, i.e. the (system of) particles, is under control.
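Schematically, and again only as a sketch in my own notation, the requirement is that one can choose cutoff-dependent couplings λ₃(Emax), λ₄(Emax) and mass m(Emax) such that the physical quantities computed from the truncated sums no longer depend on the truncation:

\frac{d}{d \log E_{\max}} \, A_{\text{physical}}\big( \lambda_3(E_{\max}), \lambda_4(E_{\max}), m(E_{\max}); E_{\max} \big) = 0 .

If such ‘running’ couplings exist, the cutoff Emax can be removed and the theory is said to be renormalizable.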

What does all this teach us? It teaches us that the type of interactions the system can undergo, and their relative strengths, depend on the scale at which we look at the system, i.e. on the energy involved in whatever process the system is experiencing. For example, it could happen that, as Emax becomes higher and higher, the coupling constant for the three-particle process, as a function of Emax, goes to zero. This would mean that, at very high energies, the disintegration of one particle into two (or of two into one) does not happen anymore, and only the process involving four particles takes place. Pictorially, only graphs of a certain shape remain relevant. Or it could happen that, at very high energies, the mass of the particles becomes zero, i.e. the particles become lighter and lighter, eventually propagating just like photons do. The general lesson, beyond technicalities and specific cases, is that for any given physical system it is crucial to understand exactly how the quantities of interest diverge, because in the details of such divergences lies important information about the true physics of the system considered. The infinities in our models should be tamed, explored in depth, and listened to.

This is what Matteo Smerlak and Valentin Bonzom have done in the work presented at the seminar, for some models of quantum space that are currently at the center of attention of the quantum gravity community. These are so-called spin foam models, in which quantum space is described in terms of spin networks (graphs whose links are assigned discrete numbers, spins, representing elementary geometric data) or, equivalently, in terms of collections of triangles glued to one another along edges, whose geometry is specified by the lengths of all such edges. Spin foam models are thus closely related both to loop quantum gravity, whose dynamical aspects they seek to define, and to other approaches to quantum gravity such as simplicial gravity. These models, very much like models for the dynamics of ordinary quantum particles, aim to compute (among other things) the probability of measuring a given configuration of quantum space, represented again as a bunch of triangles glued together or as a spin network graph. Notice that here a ‘configuration of quantum space’ means both a given shape of space (it could be a sphere, a doughnut, or any other fancier shape) and a given geometry (it could be a very big or a very small sphere, a sphere with some bumps here and there, etc.). One could also consider computing the probability of a transition from a given configuration of quantum space to a different one.

More precisely, the models that Bonzom and Smerlak studied are simplified ones (with respect to those that aim at describing our 4-dimensional space-time), in which the dynamics is such that, whatever the shape and geometry of space one is considering, should one measure the curvature of that space at any given location during its evolution, one would find zero. In other words, these models describe only flat space-times. This is of course a drastic simplification, but not one that makes the resulting models uninteresting. On the contrary, these flat models are not only perfectly adequate to describe quantum gravity in the case in which space has only two dimensions, rather than three, but are also the very basis for constructing realistic models for 3-dimensional quantum space, i.e. 4-dimensional quantum spacetime. As a consequence, these models, together with the more realistic ones, have been a focus of attention for the community of quantum gravity researchers.

What is the problem being discussed, then? As you can imagine, the usual one: when one tries to compute the mentioned probability for a certain evolution of quantum space, even within these simplified models, the answer one gets is the ever-present, but by now only slightly intimidating, infinity. What does the calculation look like? It looks very similar to the calculation for the probability of a given process of evolution of particles in quantum field theory. Consider the case in which space is 2-dimensional and therefore space-time is 3-dimensional. Suppose you want to compute the probability of measuring first n triangles glued to one another to form, say, a 2-dimensional sphere (the surface of a soccer ball) of a given size, and then m triangles glued to form, say, the surface of a doughnut. Now take a collection of an arbitrary number of triangles and glue them to one another along edges to form a 3-dimensional object of your choice, just like kids stick LEGO blocks to one another to form a house or a car or some spaceship (you see, science is in many ways the development of children’s curiosity by other means). It could be as simple as a soccer ball, in principle, or something extremely complicated, with holes, multiple connections, anything. There is only one condition on the 3-dimensional object you can build: its surface should be formed, in the example we are considering here, by two disconnected parts: one in the shape of a sphere made of n triangles, and one in the shape of the surface of a doughnut made of m triangles. This condition would, for example, prevent you from building a soccer ball, which you could do, instead, if you wanted to consider only the probability of measuring n triangles forming a sphere, with no doughnut involved. Too bad. We’ll be lazy in this example and consider a doughnut but no soccer ball. Anyway, apart from this, you can do anything.

Let us pause for a second to clarify what it means for a space to have a given shape. Consider a point on the sphere and take a path on the sphere that starts at that point and after a while comes back to it, forming a loop. You see that there is no problem in letting this loop become smaller and smaller, eventually shrinking to a point and disappearing. Now do the same operation on the surface of a doughnut. You will see that certain loops can again be shrunk to a point and made to disappear, while others cannot: these are the ones that go around the hole of the doughnut. So you see that operations like these can help us determine the shape of our space. The same holds true for 3d spaces; one just needs more types of operations of this kind. OK, now finish building your 3-dimensional object made of as many triangles as you want. Just like the triangles on the boundary of the 3d object (those forming the sphere and the doughnut), the triangles forming the 3d object itself come with numbers associated to their edges. These numbers, as said, specify the geometry of all the triangles, and therefore of the sphere, of the doughnut, and of the 3d object that has them on its boundary.


A collection of glued triangles forming a sphere (left) and a doughnut (right); the interior 3d space can also be built out of glued triangles having the given shape on the boundary: for the first object, the interior is a ball; for the second it forms what is called a solid torus. Pictures from http://www.hakenberg.de/

The theory (the spin foam model you are studying) should give you a probability for the process considered. If the triangles forming the sphere represent how quantum space was at first, and the triangles forming the doughnut how it is in the end, the 3d object chosen represents a possible quantum space-time. In the analogy with the particle process described earlier, the n triangles forming a sphere correspond to the initial n particles, the m triangles forming the doughnut correspond to the final m particles, and the triangulated 3d object is the analogue of a possible ‘interaction process’, a possible history of triangles being created/destroyed, forming different shapes and changing their size; this size is encoded in their edge lengths, which are the analogue of the energies of the particles. The spin foam model now gives you the probability for the process in the form of a sum over the probabilities for all possible assignments of lengths to the edges of the 3d object, each probability enforcing that the 3d object is flat (it equals zero if the 3d object is not flat). As anticipated, this calculation gives the usual nonsensical infinity as a result. But again, we now know that we should get past the disappointment and look more carefully at what this infinity hides. So what one does is again to imagine that there is a maximal length that the edges of the triangles can have, call it Emax, define the truncated amplitude, and study carefully how it behaves when Emax grows, when it is allowed to become larger and larger.
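To fix ideas, here is the schematic form such a truncated state-sum takes in a Ponzano-Regge-type model for flat 3d quantum gravity (this is my own simplified rendering; the models analyzed in the seminar are of this general family). The edge lengths are encoded in half-integers (‘spins’) j_e, and the amplitude for a triangulated 3d object Δ reads, roughly,

Z_{E_{\max}}(\Delta) = \sum_{\{ j_e \le E_{\max} \}} \; \prod_{\text{edges } e} (2 j_e + 1) \; \prod_{\text{tetrahedra } t} \{6j\}_t ,

where the weight {6j} attached to each tetrahedron is what implements the flatness condition, and the cutoff Emax on the spins is the ‘maximal length’ just mentioned. The question is then how Z grows when Emax is allowed to become larger and larger.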

In a sense, in this case, what is hidden inside this infinity is the whole complexity of a 3d space, at least of a flat one. What one finds is that hidden in this infinity, and carefully revealed by the scaling of the above amplitude with Emax, is all the information about the shape of the 3d object, i.e. of the possible 3d spacetime considered, and all the information about how this 3d spacetime has been constructed out of triangles. That’s lots of information!
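Very schematically, the outcome of such an analysis (in made-up but hopefully suggestive notation) is a growth law of the form

Z_{E_{\max}}(\Delta) \;\sim\; C \, (E_{\max})^{\Omega(\Delta)} \quad \text{as } E_{\max} \to \infty ,

where the exponent Ω depends both on the topology of the 3d object and on the way it has been built out of triangles; computing and interpreting this exponent is, concretely, what reading the information hidden in the infinity means here.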

Bonzom and Smerlak, in the work described at the seminar, have gone a very long way toward unraveling all this information, delving deeper and deeper into the hidden secrets of this particular infinity. Their work is developed in a series of papers, in which they offer a very elegant mathematical formulation of the problem and a new approach toward its solution, progressively sharpening their results and improving our understanding of these specific spin foam models for quantum gravity, of the way they depend on the shape and on the specific construction of each 3d spacetime, and of which shapes and constructions give, in some sense, the ‘bigger’ infinity. Their work represents a very important contribution to an area of research that is growing fast, and in which many other results, from other groups around the world, had already been obtained and are still being obtained today.

There is even more. The analogy with particle processes in quantum field theory can be made sharper, and one can indeed study peculiar types of field theories, called ‘group field theories’, such that the above amplitude is generated by the theory and assigned to the corresponding process, as in spin foam models, while at the same time all possible processes are taken into account, as in standard quantum field theories for particles.
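For the 3d flat models discussed here, the group field theory in question is of the so-called Boulatov type: a field φ depending on three group elements, one per edge of a triangle, with an action that, schematically (and glossing over the symmetry requirements on the field), reads

S[\varphi] = \frac{1}{2} \int dg_1 dg_2 dg_3 \; \varphi(g_1,g_2,g_3)^2 \;-\; \frac{\lambda}{4!} \int \prod_{i=1}^{6} dg_i \; \varphi(g_1,g_2,g_3)\, \varphi(g_3,g_4,g_5)\, \varphi(g_5,g_2,g_6)\, \varphi(g_6,g_4,g_1) ,

where each field represents a triangle and the quartic interaction glues four triangles into a tetrahedron. The Feynman expansion of this field theory then generates all possible triangulated 3d objects, each weighted by a spin foam amplitude of the kind described above.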

This change of framework, embedding the spin foam model into a field theory language, does not change the problem of the divergence of the sum over the edge lengths much, nor its infinite result. And it does not change the information about the shape of space encoded in this infinity. However, it changes the perspective from which we look at this infinity and at its hidden secrets. In fact, in this new context, space and space-time are truly dynamical: all possible spaces and space-times have to be considered together, on an equal footing, and compete in their contribution to the total probability for a certain transition from one configuration of quantum space to another. We cannot just choose one given shape, do the calculation and be content with it (once we have dealt with the infinity resulting from doing the calculation naively). The possible space-times we have to consider, moreover, include really weird ones, with billions of holes and strange connections from one region to another, and 3d objects that do not really look like sensible space-times at all, and so on. We have to take them all into account, in this framework. This is of course an additional technical complication. However, it is also a fantastic opportunity. In fact, it offers us the chance to ask, and possibly answer, a very interesting question: why is our space-time, at least at our macroscopic scale, the way it is? Why does it look so regular, so simple in its shape, actually as simple as a sphere? Try it: we can consider an imaginary loop located anywhere in space and shrink it to a point, making it disappear, without any trouble, right? If the dynamics of quantum space is governed by a model (spin foam or group field theory) like the ones described, this is not obvious at all, but something to be explained. Processes that look as nice as our macroscopic space-time are but a tiny minority among the zillions of possible space-times that enter the sum we discussed, among all the possible processes that have to be considered in the above calculations. So why should they ‘dominate’ and end up being the truly important ones, those that best approximate our macroscopic space-time? Why and how do they ‘emerge’ from the others and originate, from this quantum mess, the nice space-time we inhabit, in a classical, continuum approximation? What is the true quantum origin of space-time, in both its shape and geometry? The way the amplitudes grow as Emax increases is where the answer to these fascinating questions lies.

The answer, once more, is hidden in the very same infinity that Bonzom, Smerlak, and their many quantum gravity colleagues around the world are so bravely taming, studying, and, step by step, understanding.