Thursday, June 09, 2005

QCD and Natural Philosophy

by Frank Wilczek. This is getting embarrassing! The embarrassment of Wilczek riches continues with this 2002 preprint at the physics archive.

His take on the "Reductionist Program"
The reductionist program, roughly speaking, is to build up the description of Nature from a few laws that govern the behavior of elementary entities, and that can't be derived from anything simpler. This definition is loose at both ends.
On the output side, because in the course of our investigations we find that there are important facts about Nature that we have to give up on predicting. These are what we - after the fact! - come to call contingencies. Three historically important examples of different sorts are the number of planets in the Solar System, the precise moment that a radioactive nucleus will decay, or what the weather will be in Boston a year from today. In each of these cases, the scientific community at first believed that prediction would be possible. And in each case, it was a major advance to realize that there are good fundamental reasons why it is not possible.
On the input side, because it is difficult - perhaps impossible - ever to prove the non-existence of simpler principles. I'll revisit this aspect at the end of the lecture.
Nevertheless, and despite its horrible name, the reductionist program has been and continues to be both inspiring and astoundingly successful. Instead of trying to refine an arbitrary a priori definition of this program, it is more edifying to discuss its best fruits. For beyond question they do succeed to an astonishing extent in "reducing" matter to a few powerful abstract principles and a small number of parameters.

Cosmology
Although it is usually passed over in silence, I think it is very important philosophically, and deserves to be emphasized, that our standard cosmology is radically modest in its predictive ambitions. It consigns almost everything about the world as we find it to contingency. That includes not only the aforementioned question of the number of planets in the Solar System, but more generally every specific fact about every specific object or group of objects in the Universe, apart from a few large-scale statistical regularities. Indeed, specific structures are supposed to evolve from the primordial perturbations, and these are only characterized statistically. In inflationary models these perturbations arise as quantum fluctuations, and their essentially statistical character is a consequence of the laws of quantum mechanics.
This unavoidably suggests the question whether we might find ourselves forced to become even more radically modest. Let us suppose for the sake of argument the best possible case, that we had in hand the fundamental equations of physics. Some of my colleagues think they do, or soon will. Even then we have to face the question of what principle determines the solution of these equations that describes the observed Universe. Let me again suppose for the sake of argument the best possible case, that there is some principle that singles out a unique acceptable solution. Even then there is a question we have to face: If the solution is inhomogeneous, what determines our location within it?

Another rationalization for considering the Anthropic Principle
As we have just discussed, the laws of reductionist physics do not suffice to tell us about the specific properties of the Sun, or of Earth. Indeed, there are many roughly similar but significantly different stars and planets elsewhere in the same Universe. On the other hand, we can aspire to a rational, and even to some extent quantitative, “derivation” of the parameters of the Sun and Earth based on fundamental laws, if we define them not by pointing to them as specific objects – that obviates any derivation – but rather by characterizing broad aspects of their behavior.
In principle any behavior will do, but possibly the most important and certainly the most discussed is their role in supporting the existence of intelligent observers, the so-called anthropic principle. There are many peculiarities of the Sun and Earth that can be explained this way. A crude example is that the mass of the Sun could not be much bigger or much smaller than it actually is because it would burn out too fast or not radiate sufficient energy, respectively.
Now if the Universe as we now know it constitutes, like the Solar System, an inhomogeneity within some larger structure, what might be a sign of it? If the parameters of fundamental physics crucial to life – just the ones we’ve been discussing! – vary from place to place, and most places are uninhabitable, there would be a signature to expect. We should expect to find that some of these parameters appear very peculiar – highly nongeneric – from the point of view of fundamental theory, and that relatively small changes in their values would preclude the existence of intelligent observers. Weinberg has made a case that the value of the cosmological term Lambda fits this description; and I’m inclined to think that ... and several other combinations of the small number of ingredients in our reduced description of matter and astrophysics do too. A fascinating set of questions is suggested here, that deserves careful attention.
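Wilczek's "crude example" about the Sun's mass can be given equally crude numbers. Here is my own back-of-the-envelope version, not anything from the paper: a main-sequence star's lifetime goes roughly as its fuel supply divided by its luminosity, and luminosity rises steeply with mass,

\[ t_\star \sim \frac{M}{L}, \qquad L \propto M^{3.5} \quad\Longrightarrow\quad t_\star \propto M^{-2.5} , \]

so a star of ten solar masses burns out a few hundred times faster than the Sun (tens of millions of years instead of about ten billion), leaving little time for anything like biological evolution, while a much lighter star simply doesn't radiate enough.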

The Unreasonable Ineffectiveness of QCD

There are some aspects of QCD I find deeply troubling – though I’m not sure if I should!
I find it disturbing that it takes vast computer resources, and careful limiting procedures, to simulate the mass and properties of a proton with decent accuracy. And for real-time dynamics, like scattering, the situation appears pretty hopeless. Nature, of course, gets such results fast and effortlessly. But how, if not through some kind of computation, or a process we can mimic by computation?
Does this suggest that there are much more powerful forms of computation that we might aspire to tap into? Does it connect to the emerging theory of quantum computers? These musings suggest some concrete challenges: Could a quantum computer calculate QCD processes efficiently? Could it defeat the sign problem, that plagues all existing algorithms with dynamical fermions? Could it do real-time dynamics, which is beyond the reach of existing, essentially Euclidean, methods?
Or, failing all that, does it suggest some limitation to the universality of computation?
Deeply related to this is another thing I find disturbing. If you go to a serious mathematics book and study the rigorous construction of the real number system, you will find it is quite hard work and cumbersome. QCD, and for that matter the great bulk of physics starting with classical Newtonian mechanics, has been built on this foundation. In practice, it functions quite smoothly. It would be satisfying, though, to have a “more reduced” description, based on more primitive, essentially discrete structures. Fredkin and recently Wolfram have speculated at length along these lines. I don’t think they’ve got very far, and the difficulties facing such a program are immense. But it’s an interesting issue.
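About that sign problem: here is a toy illustration of my own (nothing to do with QCD's actual fermion determinant) of why oscillating weights are so deadly for importance-sampling Monte Carlo. The quantity being averaged is a pure oscillating factor whose exact mean shrinks exponentially with a "badness" parameter k, while the statistical noise only falls like one over the square root of the number of samples, so the relative error explodes.

import math
import random

def average_sign(k, n_samples=200_000, seed=1):
    # Sample x from the positive weight exp(-x^2/2) and reweight by the
    # oscillating factor cos(k*x); its exact average is exp(-k^2/2).
    rng = random.Random(seed)
    total = 0.0
    total_sq = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        s = math.cos(k * x)
        total += s
        total_sq += s * s
    mean = total / n_samples
    variance = max(total_sq / n_samples - mean * mean, 0.0)
    return mean, math.sqrt(variance / n_samples)

for k in (1, 3, 5, 7):
    est, err = average_sign(k)
    print(f"k={k}: estimate {est:+.2e} +/- {err:.1e}, exact {math.exp(-k * k / 2):.2e}")

Already at k = 5 the exact answer (a few parts in a million) is buried under statistical noise hundreds of times larger; in lattice QCD with dynamical fermions at finite density the analogous "average sign" shrinks exponentially with the volume, which is why simply throwing more computer time at it doesn't help.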

Inventory and Outlook of High Energy Physics

by Frank Wilczek. The list of well-written articles by Wilczek continues with this 2002 preprint at the physics archive.

"No Conclusion"
But all this progress should not mark an end. Rather it allows us to ask – that’s easy enough! – and (more impressive) to take meaningful, concrete stabs at answering some truly awesome questions. Do all the fundamental interactions derive from a single underlying principle? What is the quantum symmetry of space-time? To what extent are the laws of physics uniquely determined? Why is there any (baryonic) matter at all? What makes the dark matter? Why is there so little dark energy, compared to what it “should” be? Why is there so much, compared to everything else in the Universe? These are not merely popularizations or vulgarizations but genuine, if schematic, descriptions of a few of our ongoing explorations.

The Universe is a Strange Place

by Frank Wilczek. Continuing the Wilczek orgy - preprint in the physics archive. This time the subject is cosmology, instead of particle physics. The abstract:
This is a broad and in places unconventional overview of the strengths and shortcomings of our standard models of fundamental physics and of cosmology. The emphasis is on ideas that have accessible experimental consequences. It becomes clear that the frontiers of these subjects share much ground in common.


"Cosmology"
..., one set of exogenous parameters in the standard model of cosmology specifies a few average properties of matter, taken over large spatial volumes. These are the densities of ordinary matter (i.e., of baryons), of dark matter, and of dark energy.
We know quite a lot about ordinary matter, of course, and we can detect it at great distances by several methods. It contributes about 3% of the total density.
Concerning dark (actually, transparent) matter we know much less. It has been “seen” only indirectly, through the influence of its gravity on the motion of visible matter. We observe that dark matter exerts very little pressure, and that it contributes about 30% of the total density.
Finally dark (actually, transparent) energy contributes about 67% of the total density. It has a large negative pressure. From the point of view of fundamental physics this dark energy is quite mysterious and disturbing, as I’ll elaborate shortly below.
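(A bit of arithmetic of my own on those round numbers, for later reference:

\[ \frac{\rho_{\text{dark energy}}}{\rho_{\text{dark matter}}} \approx \frac{67}{30} \approx 2 , \qquad \frac{\rho_{\text{dark matter}}}{\rho_{\text{ordinary}}} \approx \frac{30}{3} = 10 . \]

Three contributions with, as far as anyone knows, utterly different origins, yet all within a factor of ten or so of one another. That near-coincidence is the "strange" fact the paper keeps circling back to.)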

Cosmic Rays
Perhaps not quite so sharply posed, but still very promising, is the problem of the origin of the highest energy cosmic rays. It remains controversial whether there are so many events observed at energies above those where protons or photons could travel cosmological distances that explaining their existence requires us to invoke new fundamental physics. However this plays out, we clearly have a lot to learn about the compositions of these events, their sources, and the acceleration mechanisms.

Is the Universe a Strange Place?
The observed values of the ratios ...[formulas for the cosmological density ratios] are extremely peculiar from the point of view of fundamental physics, as currently understood. Leading ideas from fundamental theory about the origin of dark matter and the origin of baryon number ascribe them to causes that are at best very remotely connected, and existing physical ideas about the dark energy, which are sketchy at best, don’t connect it to either of the others. Yet the ratios are observed to be close to unity. And the fact that these ratios are close to unity is crucial to cosmic ecology; the world would be a very different place if their values were grossly different from what they are.
Several physicists, among whom S. Weinberg was one of the earliest and remains among the most serious and persistent, have been led to wonder whether it might be useful, or even necessary, to take a different approach, invoking anthropic reasoning. Many physicists view such reasoning as a compromise or even a betrayal of the goal of understanding the world in rational, scientific terms. Certainly, some adherents of the “Anthropic Principle” have overdone it. No such “Principle” can substitute for deep principles like symmetry and locality, which support a vast wealth of practical and theoretical applications, or the algorithmic description of Nature in general. But I believe there are specific, limited circumstances in which anthropic reasoning is manifestly appropriate and unavoidable.

The highlighted opinion concurs with my own - for whatever that's worth. The Anthropic Principle seems like one of those things that get invoked when you run out of better ideas. That Weinberg and Wilczek, who have had many very good ideas indeed, are driven to this last resort seems like a bad sign to me.

A Constructive Critique of the Three Standard Systems

by F. Wilczek. Preprint in the physics archive.
Abstract:
It has become conventional to say that our knowledge of fundamental physical law is summarized in a Standard Model. But this convention lumps together two quite different conceptual structures, and leaves out another. I think it is more accurate and informative to say that our current, working description of fundamental physics is based on three standard conceptual systems. These systems are very different; so different, that it is not inappropriate to call them the Good, the Bad, and the Ugly. They concern, respectively, the coupling of vector gauge particles, gravitons, and Higgs particles. It is quite a remarkable fact, in itself, that every nonlinear interaction we need to summarize our present knowledge of the basic (i.e., irreducible) laws of physics involves one or another of these particles.

More critical remarks on the Standard Model:
Looking critically at the structure of a single standard model family, as displayed in Figure 3, one has no trouble picking out flaws.
The gauge symmetry contains three separate pieces, and the fermion representation contains five separate pieces. While this is an amazingly tight structure, considering the wealth of phenomena described, it clearly fails to achieve the ultimate in simplicity and irreducibility. Let me remind you, in this context, that electroweak “unification” is something of a misnomer. There are still two separate symmetries, and two separate coupling constants, in the electroweak sector of the standard model. It is much more accurate to speak of electroweak “mixing”.
Worst of all, the abelian U(1) symmetry is powerless to quantize its corresponding charges. The hypercharge assignments – indicated in Figure 3 by the numerical subscripts – must be chosen on purely phenomenological grounds. On the face of it, they appear in a rather peculiar pattern. If we are counting continuous parameters, the freedom to choose their values takes us from three to seven (and more, if we restore the families). The electrical neutrality of atoms is a striking and fundamental fact, which has been checked to extraordinary precision, and which is central to our understanding of Nature. In the standard model this fact appears, at a classical level, to require finely tuned hand-adjustment.
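Since I can't reproduce Figure 3 here, let me spell out that "peculiar pattern" from memory (my gloss, in the convention where electric charge is Q = T_3 + Y): the five fermion pieces of a single family carry hypercharges

\[ Y(Q_L) = +\tfrac{1}{6}, \quad Y(u_R) = +\tfrac{2}{3}, \quad Y(d_R) = -\tfrac{1}{3}, \quad Y(L_L) = -\tfrac{1}{2}, \quad Y(e_R) = -1 . \]

Those odd fractions are exactly what makes the proton charge, 2(+2/3) + (-1/3) = +1, cancel the electron's -1, which is the finely tuned "hand-adjustment" behind atomic neutrality. And the earlier "mixing" remark amounts to the fact that the photon couples through the combination e = g g' / \sqrt{g^2 + g'^2} of the two independent electroweak couplings, not through a single unified one.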

Gravity
What makes this very tight, predictive, and elegant theory of quantum gravity “bad” is not that there is any experiment that contradicts it. There isn’t. Nor, I think, is the main problem that this theory cannot supply predictions for totally academic thought experiments about ultrahigh energy behavior. It can’t, but there are more pressing issues, that might have more promise of leading to contact between theory and empirical reality.
A great lesson of the standard model is that what we have been evolved to perceive as empty space is in fact a richly structured medium. It contains symmetry-breaking condensates associated with electroweak superconductivity and spontaneous chiral symmetry breaking in QCD, an effervescence of virtual particles, and probably much more. Since gravity is sensitive to all forms of energy it really ought to see this stuff, even if we don’t. A straightforward estimation suggests that empty space should weigh several orders of magnitude of orders of magnitude (no misprint here!) more than it does. It “should” be much denser than a neutron star, for example. The expected energy of empty space acts like dark energy, with negative pressure, but there’s much too much of it.
To me this discrepancy is the most mysterious fact in all of physical science, the fact with the greatest potential to rock the foundations. We’re obviously missing some major insight here. Given this situation, it’s hard to know what to make of the ridiculously small amount of dark energy that presently dominates the Universe!
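To put my own rough numbers on "orders of magnitude of orders of magnitude": the observed dark energy corresponds to an energy scale of roughly a milli-electronvolt (raised to the fourth power), while cutting the vacuum-energy integrals off at the Planck scale gives something like (10^19 GeV)^4, a naive mismatch of

\[ \left( \frac{10^{19}\ \mathrm{GeV}}{10^{-3}\ \mathrm{eV}} \right)^{4} \sim 10^{120} . \]

Even the QCD chiral condensate alone, at a scale of a couple of hundred MeV, overshoots the observed value by forty-odd orders of magnitude, which is roughly where "much denser than a neutron star" comes from.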

The Flavor/Higgs Sector

We know of no deep principle, comparable to gauge symmetry or general covariance, which constrains the values of these couplings tightly. For that reason, it is in this sector where continuous parameters proliferate, into the dozens. Basically, we introduce each observed mass and weak mixing angle as an independent input, which must be determined empirically. The phenomenology is not entirely out of control: the general framework (local relativistic quantum field theory, gauge symmetry, and renormalizability) has significant consequences, and even this part of the standard model makes many non-trivial predictions and is highly over-constrained. ...
Neutrino masses and mixings can be accommodated along similar lines, if we expand the framework slightly. ... The flavor/Higgs sector of fundamental physics is its least satisfactory part. Whether measured by the large number of independent parameters or by the small number of powerful ideas it contains, our theoretical description of this sector does not attain the same level as we’ve reached in the other sectors. This part really does deserve to be called a “model” rather than a “theory”.
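To make "dozens" concrete, here is the usual tally as I remember it (conventions for counting differ slightly): 6 quark masses, 3 charged-lepton masses, 3 CKM mixing angles and 1 CP-violating phase, plus the 2 parameters of the Higgs potential, make up 15 of the standard model's 19 continuous parameters (the remaining 4 being the three gauge couplings and the QCD theta angle); accommodating neutrino masses and mixings adds at least 7 more, all of them in this same flavor sector.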

Another cute sarcastic remark (somewhat unfairly out of context):
Finally let me mention one redeeming virtue of the Higgs sector. (“Virtue” might be too strong; actually, what I’m about to do is more in the nature of advertising a bug as a feature.)

Longing for the Harmonies

by Frank Wilczek and Betsy Devine.
As I was reading the Wilczek preprints I thought "Wilczek is such a good writer he should do a book for the general public". Well, he has; I have it and I've read it, but that was so long ago (1988) that unfortunately I have only vague memories of it, though mildly positive ones.

Yang-Mills Theory In, Beyond, and Behind Observed Reality

by Frank Wilczek, preprint in the physics archive.
Abstract
The character of jets is dominated by the influence of intrinsically nonabelian gauge dynamics. These proven insights into fundamental physics ramify in many directions, and are far from being exhausted. I will discuss three rewarding explorations from my own experience, whose point of departure is the hard Yang-Mills interaction, and whose end is not yet in sight. Given an insight so profound and fruitful as Yang and Mills brought us, it is in order to try to consider its broadest implications, which I attempt at the end.


Wilczek is an excellent writer (see especially his Nobel lecture in the previous post). This article probably isn't quite as accessible to a general audience (as you can probably see from the abstract). The average reader is probably not familiar with "Yang-Mills" theories and the article doesn't try to explain them, but it's still worth a peek, if you can skim past any unfamiliar technicalities.

For example: "But as physicists hungry for answers, we properly regard strict mathematical rigor as a desirable luxury, not an indispensable necessity". Which might surprise non-physicists - mathematicians are often appalled by the pseudo-mathematical activities of theoretical physicists.

Here's another quote along the same lines, this time from "Quantum Theory: Concepts and Methods" by Asher Peres.
Physicists usually have a nonchalant attitude when the number of dimensions is extended to infinity. Optimism is the rule, and every infinite sequence is presumed to be convergent, unless proven guilty.

While a mathematician would probably treat an infinite sequence with extreme suspicion until it was proved to converge.

Our current theories of particle physics are not really viewed as anything more than "low-energy" approximations; they can't even be made to produce sensible answers at arbitrarily high energies. Here's another quote from the Wilczek article:
A slightly different perspective on renormalizability is associated with the philosophy of effective field theory. According to this philosophy it is presumptuous, or at least unnecessarily committal, to demand that our theories be self-contained up to arbitrarily large energies. So we should not demand that the effect of a high-mass cutoff, which marks the breakdown of our effective theory, can be removed entirely. Instead, we acknowledge that new degrees of freedom may open up at the large mass scale, and we postulate only that these degrees of freedom approximately decouple from low-scale physics. By requiring that the effective theory they leave behind should be self-contained and approximately valid up to the high mass scale, we are then led to a similar "effective" veto, which outlaws quantitatively significant nonrenormalizable couplings.
Of course, this philosophy only puts off the question of consistency, passing that burden on to the higher mass-scale theory. Presumably this regress must end somewhere, either in a fully consistent quantum field theory or in something else (string theory?).
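The dimensional analysis behind that "effective veto", as I understand it: a non-renormalizable operator of mass dimension d > 4 has to come with a coefficient carrying inverse powers of the cutoff, so its low-energy effects are suppressed by the corresponding power of the energy over the cutoff,

\[ \mathcal{L} \supset \frac{c}{\Lambda^{\,d-4}} \, \mathcal{O}_d \quad\Longrightarrow\quad \text{effects at } E \ll \Lambda \ \text{scale like} \ \left( \frac{E}{\Lambda} \right)^{d-4} . \]

Requiring the effective theory to stay sensible nearly all the way up to Lambda therefore forbids any such coupling from being quantitatively significant down here.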


Here's another take on this theme, this time by Steven Weinberg in his 1986 Dirac Memorial Lecture Towards the Final Laws of Physics (which is included in the slim volume Elementary Particles and the Laws of Physics by Richard P. Feynman & Steven Weinberg; Feynman's article is also very good).
Most theoretical physicists today have come around to the point of view that the standard model of which we're so proud, the quantum field theory of weak, electromagnetic and strong interactions, is nothing more than a low energy approximation to a much deeper and quite different underlying field theory.


The concluding section of Wilczek's article is titled "Patterns of Explanation"; it's especially nice:
If there are to be simple explanations for complex phenomena, what form can they take?
One archetype is symmetry. In fundamental physics, especially in the twentieth century, symmetry has been the most powerful and fruitful guiding principle. By tying together the description of physical behavior in many different circumstances – at different places, at different times, viewed at different speeds and, of course, in different gauges! – it allows us to derive a wealth of consequences from our basic hypotheses. When combined with the principles of quantum theory, symmetry imposes very stringent consistency requirements, as we have discussed, leading to tight, predictive theories, of which Yang-Mills theory forms the archetype within the archetype.
(In the present formulation of physics quantum theory itself appears as a set of independent principles, which loosely define a conceptual framework. It is not absurd to hope that in the future these principles will be formulated more strictly, in a way that involves symmetry deeply.)
A different archetype, which pervades biology and cosmology, is the unfolding of a program. Nowadays we are all familiar with the idea that simple computer programs, unfolded deterministically according to primitive rules, can produce fantastically complicated patterns, such as the Mandelbrot set and other fractals; and with the idea that a surprisingly small library of DNA code directs biological development.
These archetypes are not mutually exclusive. Conway’s Game of Life, for example, uses simple, symmetric, deterministic rules, always and everywhere the same; but it can, operating on simple input, produce extremely complex, yet highly structured output.
In fundamental physics to date, we have mostly got along without having to invoke partial unfolding of earlier, primary simplicity as a separate explanatory principle. In constructing a working model of the physical world, to be sure, we require specification of initial conditions for the fundamental equations. But we have succeeded in paring these initial conditions down to a few parameters describing small departures from space-time homogeneity and thermal equilibrium in the very early universe; and the roles of these two aspects of world-construction, equations and initial conditions, have remained pretty clearly separated. Whether symmetry will continue to expand its explanatory scope, giving rise to laws of such power that their solution is essentially unique, thus minimizing the role of initial conditions; or whether “fundamental” parameters (e.g., quark and lepton masses and mixing angles) in fact depend upon our position within an extended, inhomogeneous Multiverse, so that evolutionary and anthropic considerations will be unavoidable; or whether some deeper synthesis will somehow remove the separation, is a great question for the future.
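Since the Game of Life came up, here it is in its entirety in Python (my own toy, obviously, not anything from the article): one simple, symmetric, deterministic rule applied everywhere and always, and a five-cell "glider" that crawls across the grid forever.

from collections import Counter

def step(live):
    # live is a set of (x, y) coordinates of living cells. Count the live
    # neighbours of every candidate cell, then apply Conway's rule: a cell
    # lives in the next generation if it has exactly three live neighbours,
    # or two live neighbours and is already alive.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(8):        # two full periods of the glider
    cells = step(cells)
print(sorted(cells))      # the same five-cell shape, shifted along the diagonal

Nothing profound, but it makes the point about the second archetype vividly: the "program" is trivial, and all the structure is in the unfolding.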