His take on the "Reductionist Program"
The reductionist program, roughly speaking, is to build up the description of Nature from a few laws that govern the behavior of elementary entities, and that can't be derived from anything simpler. This definition is loose at both ends.
On the output side, because in the course of our investigations we find that there are important facts about Nature that we have to give up on predicting. These are what we - after the fact! - come to call contingencies. Three historically important examples of different sorts are the number of planets in the Solar System, the precise moment that a radioactive nucleus will decay, and what the weather will be in Boston a year from today. In each of these cases, the scientific community at first believed that prediction would be possible. And in each case, it was a major advance to realize that there are good fundamental reasons why it is not possible.
On the input side, because it is difficult - perhaps impossible - ever to prove the non-existence of simpler principles. I'll revisit this aspect at the end of the lecture.
Nevertheless, and despite its horrible name, the reductionist program has been and continues to be both inspiring and astoundingly successful. Instead of trying to refine an arbitrary a priori definition of this program, it is more edifying to discuss its best fruits. For beyond question they do succeed to an astonishing extent in "reducing" matter to a few powerful abstract principles and a small number of parameters.
Cosmology
Although it is usually passed over in silence, I think it is very important philosophically, and deserves to be emphasized, that our standard cosmology is radically modest in its predictive ambitions. It consigns almost everything about the world as we find it to contingency. That includes not only the aforementioned question of the number of planets in the Solar System, but more generally every specific fact about every specific object or group of objects in the Universe, apart from a few large-scale statistical regularities. Indeed, specific structures are supposed to evolve from the primordial perturbations, and these are only characterized statistically. In inflationary models these perturbations arise as quantum fluctuations, and their essentially statistical character is a consequence of the laws of quantum mechanics.
This unavoidably suggests the question whether we might find ourselves forced to become even more radically modest. Let us suppose for the sake of argument the best possible case, that we had in hand the fundamental equations of physics. Some of my colleagues think they do, or soon will. Even then we have to face the question of what principle determines the solution of these equations that describes the observed Universe. Let me again suppose for the sake of argument the best possible case, that there is some principle that singles out a unique acceptable solution. Even then there is a question we have to face: If the solution is inhomogeneous, what determines our location within it?
Another rationalization for considering the Anthropic Principle
As we have just discussed, the laws of reductionist physics do not suffice to tell us about the specific properties of the Sun, or of Earth. Indeed, there are many roughly similar but significantly different stars and planets elsewhere in the same Universe. On the other hand, we can aspire to a rational, and even to some extent quantitative, “derivation” of the parameters of the Sun and Earth based on fundamental laws, if we define them not by pointing to them as specific objects – which would foreclose any derivation – but rather by characterizing broad aspects of their behavior.
In principle any behavior will do, but possibly the most important, and certainly the most discussed, is their role in supporting the existence of intelligent observers – the consideration behind the so-called anthropic principle. There are many peculiarities of the Sun and Earth that can be explained this way. A crude example is that the mass of the Sun could not be much bigger or much smaller than it actually is: a significantly more massive Sun would burn out too fast, while a significantly less massive one would not radiate sufficient energy.
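To put rough numbers on the stellar-mass example (a back-of-the-envelope sketch, using the approximate empirical main-sequence scaling L ∝ M^3.5 rather than anything derived from first principles), the hydrogen-burning lifetime scales as
\[
t_{\mathrm{ms}} \;\sim\; \frac{M}{L} \;\propto\; M^{-2.5},
\]
so a star of ten solar masses lives only about 1/300 as long as the Sun (since 10^-2.5 ≈ 1/300) – tens of millions of years rather than roughly ten billion, leaving little time for anything like the slow evolution of complex life – while a star of a tenth of a solar mass radiates only about 3 x 10^-4 of the Sun's power.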
Now if the Universe as we now know it constitutes, like the Solar System, an inhomogeneity within some larger structure, what might be a sign of it? If the parameters of fundamental physics crucial to life – just the ones we’ve been discussing! – vary from place to place, and most places are uninhabitable, there is a signature we should expect: some of these parameters will appear very peculiar – highly nongeneric – from the point of view of fundamental theory, and yet relatively small changes in their values would preclude the existence of intelligent observers. Weinberg has made a case that the value of the cosmological term Lambda fits this description; and I’m inclined to think that ... and several other combinations of the small number of ingredients in our reduced description of matter and astrophysics do too. A fascinating set of questions is suggested here, and it deserves careful attention.
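For orientation, here is a schematic version of Weinberg’s argument, in my own rough paraphrase with round numbers: a cosmological term large enough to dominate the energy density before galaxies condensed would have shut off the growth of structure, so habitability requires roughly
\[
\rho_{\Lambda} \;\lesssim\; \rho_{\mathrm{matter}}(z_{\mathrm{gal}}) \;=\; (1+z_{\mathrm{gal}})^{3}\,\rho_{\mathrm{matter},\,0}.
\]
With galaxy formation at redshifts of a few, this bounds the vacuum energy density to within a couple of orders of magnitude of the present matter density – vastly smaller than naive quantum-field-theory estimates, and not far above the value actually observed.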
The Unreasonable Ineffectiveness of QCD
There are some aspects of QCD that I find deeply troubling – though I’m not sure whether I should!
I find it disturbing that it takes vast computer resources, and careful limiting procedures, to simulate the mass and properties of a proton with decent accuracy. And for real-time dynamics, like scattering, the situation appears pretty hopeless. Nature, of course, gets such results fast and effortlessly. But how, if not through some kind of computation, or a process we can mimic by computation?
Does this suggest that there are much more powerful forms of computation that we might aspire to tap into? Does it connect to the emerging theory of quantum computers? These musings suggest some concrete challenges: Could a quantum computer calculate QCD processes efficiently? Could it defeat the sign problem, which plagues all existing algorithms with dynamical fermions? Could it do real-time dynamics, which is beyond the reach of existing, essentially Euclidean, methods?
Or, failing all that, does it suggest some limitation to the universality of computation?
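To make the sign problem concrete, here is a toy numerical sketch in Python – not lattice QCD, just the underlying statistical mechanism. When the sampling weight oscillates in sign, one samples with its absolute value and folds the sign into the observable; the quantity to be estimated then shrinks rapidly while the statistical noise does not, so the relative error explodes:

    import numpy as np

    # Toy model of the sign problem: estimate the mean of an oscillating
    # factor cos(lam * x), with x drawn from a Gaussian. The exact answer,
    # exp(-lam^2 / 2), shrinks rapidly as lam grows, while the statistical
    # noise of the Monte Carlo estimate stays roughly constant, so the
    # relative error blows up. (Illustrative only; not a QCD calculation.)
    rng = np.random.default_rng(0)
    n_samples = 1_000_000
    x = rng.standard_normal(n_samples)

    for lam in (1.0, 3.0, 5.0, 7.0):
        signs = np.cos(lam * x)
        estimate = signs.mean()
        stat_error = signs.std(ddof=1) / np.sqrt(n_samples)
        exact = np.exp(-lam**2 / 2)
        print(f"lam={lam:3.1f}  estimate={estimate:+.2e}  "
              f"error={stat_error:.1e}  exact={exact:.2e}")

Already at modest oscillation frequencies the true answer is buried far below the Monte Carlo noise. In lattice QCD with dynamical fermions at nonzero baryon density the analogous suppression is, roughly speaking, exponential in the spacetime volume, which is why the problem is considered so severe.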
Deeply related to this is another thing I find disturbing. If you go to a serious mathematics book and study the rigorous construction of the real number system, you will find that it is hard and cumbersome work. QCD, and for that matter the great bulk of physics starting with classical Newtonian mechanics, has been built on this foundation. In practice, it functions quite smoothly. It would be satisfying, though, to have a “more reduced” description, based on more primitive, essentially discrete structures. Fredkin and, more recently, Wolfram have speculated at length along these lines. I don’t think they’ve gotten very far, and the difficulties facing such a program are immense. But it’s an interesting issue.
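For a flavor of what such primitive, essentially discrete structures might look like at their very simplest, here is a minimal sketch of an elementary cellular automaton of the sort Wolfram studies – offered purely as an illustration of the ingredients, not as a candidate description of Nature:

    # A minimal elementary cellular automaton (Wolfram's "rule 110"), shown
    # only to illustrate the kind of primitive, discrete structure Fredkin
    # and Wolfram have in mind; emphatically not a model of physics.
    def step(cells, rule=110):
        n = len(cells)
        new_cells = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            index = (left << 2) | (center << 1) | right   # neighborhood as a 3-bit number
            new_cells.append((rule >> index) & 1)         # that bit of the rule is the new state
        return new_cells

    cells = [0] * 40 + [1] + [0] * 40                     # start from a single "on" cell
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Starting from a single occupied cell, a handful of bit operations already generates intricate, irregular-looking structure; whether anything along these lines could underwrite continuum physics is, as I said, quite another matter.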