
Article in the New York Times on Saddam Hussein's latest literary effort. Doubt I'll read it (it's only available in Arabic at the moment) but the cover is cool.
A Survey of the Sciences and Arts
Males pit their genes against females by chucking DNA out of eggs.
The sperm of the male ant appears to be able to destroy the female DNA within a fertilized egg, giving birth to a male that is a clone of its father. Meanwhile, the female queens make clones of themselves to carry on the royal female line.
In an ant species — or is it two species? — females are produced only by females and males only by males. Explanations of this revelation have to invoke some decidedly offbeat patterns of natural selection.
How do neurons in the brain represent movie stars, famous buildings and other familiar objects? Rare recordings from single neurons in the human brain provide a fresh perspective on the question.
'Grandmother cell' is a term coined by J. Y. Lettvin to parody the simplistic notion that the brain has a separate neuron to detect and represent every object (including one's grandmother). The phrase has become a shorthand for invoking all of the overwhelming practical arguments against a one-to-one object coding scheme. No one wants to be accused of believing in grandmother cells. But on page 1102 of this issue, Quiroga et al.3 describe a neuron in the human brain that looks for all the world like a 'Jennifer Aniston' cell. Ms Aniston could well become a grandmother herself someday. Are vision scientists now forced to drop their dismissive tone when discussing the neural representation of matriarchs?
The three years 2001 to 2003 were the golden years of solar neutrino research. In this period, scientists solved a mystery with which they had been struggling for four decades. The solution turned out to be important both for physics and for astronomy. In this article, I tell the story of those fabulous three years.
Entanglement is perhaps the most non-classical manifestation of quantum mechanics. Among its many interesting applications to information processing, it can be harnessed to reduce the amount of communication required to process a variety of distributed computational tasks. Can it be used to eliminate communication altogether? Even though it cannot serve to signal information between remote parties, there are distributed tasks that can be performed without any need for communication, provided the parties share prior entanglement: this is the realm of pseudo-telepathy.
One of the earliest uses of multi-party entanglement was presented by Mermin in 1990. Here we recast his idea in terms of pseudo-telepathy: we provide a new computer-scientist-friendly analysis of this game. We prove an upper bound on the best possible classical strategy for attempting to play this game, as well as a novel, matching lower bound. This leads us to considerations on how well imperfect quantum-mechanical apparatus must perform in order to exhibit a behaviour that would be classically impossible to explain. Our results include improved bounds that could help vanquish the infamous detection loophole.
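To make the classical bound concrete, here is a minimal sketch of the three-player GHZ form of Mermin's game (my own simplified rendering; the paper analyses the general n-player game). Brute-force enumeration of all deterministic classical strategies shows that no classical team can win more than 3/4 of the rounds, whereas players sharing a GHZ state can, with ideal apparatus, win every round.

```python
from itertools import product

# Allowed inputs (r, s, t) satisfy the promise r XOR s XOR t == 0.
inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def wins(a, b, c, r, s, t):
    # GHZ winning condition: XOR of the outputs equals OR of the inputs.
    return (a ^ b ^ c) == (r | s | t)

# A deterministic classical strategy for one player is a map {0,1} -> {0,1},
# i.e. a pair (output on input 0, output on input 1): 4 choices per player.
best = 0.0
for fa, fb, fc in product(product((0, 1), repeat=2), repeat=3):
    score = sum(wins(fa[r], fb[s], fc[t], r, s, t)
                for r, s, t in inputs) / len(inputs)
    best = max(best, score)

print(best)  # 0.75
```

Since shared randomness is just a mixture of deterministic strategies, 3/4 is also the bound for randomized classical players; the pseudo-telepathic gap is the quarter that entanglement closes.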
Much of my current research involves studies of the sexually cannibalistic Australian redback spider (Latrodectus hasselti), and its close relatives, the black widows (genus Latrodectus). Redback spiders are intriguing because males actively 'encourage' females to cannibalize them while they mate. Unlike most other sexually cannibalistic species (e.g., praying mantids), where males attempt to escape from the female's jaws, redback males actually 'somersault' onto the female's mouthparts during copulation, an act of male sexual sacrifice.
(A) Copulation begins with the male standing on the female's abdomen. Both spiders are facing in the same direction and are 'belly to belly'. The male has two copulatory organs (the palps) that are attached at the anterior-most part of his 'head' (cephalothorax). Copulation begins when one of the palps is inserted into the female's genital opening. In most other black widow spiders, the pair copulates while in this posture.
(B) In redbacks, however, a few seconds after palp insertion, the male, using the palp as a pivot, moves into a 'headstand' posture.
(C) The male then quickly turns through 180 degrees, landing with his 'back' (the dorsal surface of the abdomen) directly above the female's fangs. In most matings, the female begins to extrude digestive enzymes almost immediately. She also pierces the male's abdomen with her fangs and begins to consume him while he is transferring sperm.
Take a look at these two snapshots. Which man is more babyfaced? Most viewers would say it's the person on the right. And that's the person who lost a 2004 U.S. congressional election to his more mature-faced and competent-looking opponent. In fact, about 70% of recent U.S. Senate races were accurately predicted based on which candidates looked more competent from a quick glance at their faces. This remarkable effect, reported by Todorov et al. on page 1623 of this issue, likely reflects differences in "babyfacedness". A more babyfaced individual is perceived as less competent than a more mature-faced, but equally attractive, peer of the same age and sex. Although we like to believe that we "don't judge a book by its cover," superficial appearance qualities such as babyfacedness profoundly affect human behavior in the blink of an eye.
A rapid and large global warming event, the Paleocene-Eocene Thermal Maximum (PETM), raised interior ocean temperatures by 4° to 5°C around 55 million years ago, a rise not equaled in any single event since then. This warming, whose origin is still debated, was accompanied by a dramatic negative carbon isotopic excursion. One hypothesis is that the release of 2000 gigatons of carbon from the destabilization of methane clathrates on the sea floor accounts for both the carbon isotopic signal and the temperature increase. Zachos et al. (p. 1611) now show that the carbonate compensation depth (roughly the depth at which calcium carbonate is no longer found in the sediment, because of dissolution during sinking) of the ocean rose by more than 2 kilometers during the PETM, which could have happened only if the amount of CO2 added to the ocean was much more than that estimated in the clathrate scenario. They find that 4000 gigatons of carbon would have been needed, so the release of clathrates alone could not have been the cause of the warming.
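A rough isotope mass balance helps make the numbers concrete (a sketch with illustrative values of my own, not taken from the paper): with an exogenic carbon reservoir of roughly $M_0 \approx 40{,}000$ Gt at $\delta^{13}\mathrm{C} \approx 0$‰ and a clathrate-methane source near $-60$‰, driving the reservoir to about $-3$‰ requires

$$ M_a \;=\; M_0\,\frac{\delta_0 - \delta_f}{\delta_f - \delta_a} \;\approx\; 40{,}000 \times \frac{0-(-3)}{-3-(-60)} \;\approx\; 2000\ \text{Gt}, $$

which is roughly the clathrate figure quoted above; a source less depleted in $^{13}$C would have to be correspondingly larger, consistent with the much bigger release that the carbonate-dissolution data demand.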
Some male prairie voles are devoted fathers and faithful partners, while others are less satisfactory on both counts. The spectrum of behavior is shaped by a genetic mechanism that allows for quick evolutionary changes, two researchers from Emory University report in today's issue of Science.
...
People have the same variability in their DNA, with a control section that comes in at least 17 lengths detected so far, Dr. Young said.
...
The Emory researchers recently noticed that in their prairie vole colony, some fathers spent more time with their pups and some less. They traced the source of this variability to its molecular roots, a variation in the length of the DNA region that controls a certain gene.
This is the gene for the vasopressin receptor, the device used by neurons to respond to vasopressin. Voles with long and short DNA segments had different patterns of vasopressin receptors in their brains, which presumably changed their response to the hormone.
Prairie voles are renowned for being faithful mates, but some individuals are more faithful than others. The difference may lie in their so-called junk DNA.
... Elizabeth Hammock and Lawrence Young of Emory University in Atlanta, Georgia, report that fidelity and other social behaviors in male prairie voles seem to depend on the length of a particular genetic sequence in a stretch of DNA between their genes. The longer this repetitive sequence, or microsatellite, the more attentive males were to their female partner and their offspring. Those with shorter microsatellites neglected their mates and pups, at least to some degree.
Although there's no evidence that human infidelity or poor parenting stems from similar variations, Hammock and Young, as well as other researchers, have begun to explore whether microsatellites can account for behavioral differences between people and primates such as chimps and bonobos.
The reductionist program, roughly speaking, is to build up the description of Nature from a few laws that govern the behavior of elementary entities, and that can't be derived from anything simpler. This definition is loose at both ends.
On the output side, because in the course of our investigations we find that there are important facts about Nature that we have to give up on predicting. These are what we - after the fact! - come to call contingencies. Three historically important examples of different sorts are the number of planets in the Solar System, the precise moment that a radioactive nucleus will decay, and what the weather will be in Boston a year from today. In each of these cases, the scientific community at first believed that prediction would be possible. And in each case, it was a major advance to realize that there are good fundamental reasons why it is not possible.
On the input side, because it is difficult - perhaps impossible - ever to prove the non-existence of simpler principles. I'll revisit this aspect at the end of the lecture.
Nevertheless, and despite its horrible name, the reductionist program has been and continues to be both inspiring and astoundingly successful. Instead of trying to refine an arbitrary a priori definition of this program, it is more edifying to discuss its best fruits. For beyond question they do succeed to an astonishing extent in "reducing" matter to a few powerful abstract principles and a small number of parameters.
Although it is usually passed over in silence, I think it is very important philosophically, and deserves to be emphasized, that our standard cosmology is radically modest in its predictive ambitions. It consigns almost everything about the world as we find it to contingency. That includes not only the aforementioned question of the number of planets in the Solar System, but more generally every specific fact about every specific object or group of objects in the Universe, apart from a few large-scale statistical regularities. Indeed, specific structures are supposed to evolve from the primordial perturbations, and these are only characterized statistically. In inflationary models these perturbations arise as quantum fluctuations, and their essentially statistical character is a consequence of the laws of quantum mechanics.
This unavoidably suggests the question whether we might find ourselves forced to become even more radically modest. Let us suppose for the sake of argument the best possible case, that we had in hand the fundamental equations of physics. Some of my colleagues think they do, or soon will. Even then we have to face the question of what principle determines the solution of these equations that describes the observed Universe. Let me again suppose for the sake of argument the best possible case, that there is some principle that singles out a unique acceptable solution. Even then there is a question we have to face: If the solution is inhomogeneous, what determines our location within it?
As we have just discussed, the laws of reductionist physics do not suffice to tell us about the specific properties of the Sun, or of Earth. Indeed, there are many roughly similar but significantly different stars and planets elsewhere in the same Universe. On the other hand, we can aspire to a rational, and even to some extent quantitative, “derivation” of the parameters of the Sun and Earth based on fundamental laws, if we define them not by pointing to them as specific objects – that obviates any derivation – but rather by characterizing broad aspects of their behavior.
In principle any behavior will do, but possibly the most important and certainly the most discussed is their role in supporting the existence of intelligent observers, the so-called anthropic principle. There are many peculiarities of the Sun and Earth that can be explained this way. A crude example is that the mass of the Sun could not be much bigger or much smaller than it actually is because it would burn out too fast or not radiate sufficient energy, respectively.
Now if the Universe as we now know it constitutes, like the Solar System, an inhomogeneity within some larger structure, what might be a sign of it? If the parameters of fundamental physics crucial to life – just the ones we’ve been discussing! – vary from place to place, and most places are uninhabitable, there is a signature to look for. We should expect to find that some of these parameters appear very peculiar – highly nongeneric – from the point of view of fundamental theory, and that relatively small changes in their values would preclude the existence of intelligent observers. Weinberg has made a case that the value of the cosmological term Lambda fits this description; and I’m inclined to think that ... and several other combinations of the small number of ingredients in our reduced description of matter and astrophysics do too. A fascinating set of questions is suggested here, one that deserves careful attention.
There are some aspects of QCD I find deeply troubling – though I’m not sure if I should!
I find it disturbing that it takes vast computer resources, and careful limiting procedures, to simulate the mass and properties of a proton with decent accuracy. And for real-time dynamics, like scattering, the situation appears pretty hopeless. Nature, of course, gets such results fast and effortlessly. But how, if not through some kind of computation, or a process we can mimic by computation?
Does this suggest that there are much more powerful forms of computation that we might aspire to tap into? Does it connect to the emerging theory of quantum computers? These musings suggest some concrete challenges: Could a quantum computer calculate QCD processes efficiently? Could it defeat the sign problem, that plagues all existing algorithms with dynamical fermions? Could it do real-time dynamics, which is beyond the reach of existing, essentially Euclidean, methods?
Or, failing all that, does it suggest some limitation to the universality of computation?
Deeply related to this is another thing I find disturbing. If you go to a serious mathematics book and study the rigorous construction of the real number system, you will find it is quite hard work and cumbersome. QCD, and for that matter the great bulk of physics starting with classical Newtonian mechanics, has been built on this foundation. In practice, it functions quite smoothly. It would be satisfying, though, to have a “more reduced” description, based on more primitive, essentially discrete structures. Fredkin and recently Wolfram have speculated at length along these lines. I don’t think they’ve got very far, and the difficulties facing such a program are immense. But it’s an interesting issue.
But all this progress should not mark an end. Rather it allows us to ask – that’s easy enough! – and (more impressive) to take meaningful, concrete stabs at answering some truly awesome questions. Do all the fundamental interactions derive from a single underlying principle? What is the quantum symmetry of space-time? To what extent are the laws of physics uniquely determined? Why is there any (baryonic) matter at all? What makes the dark matter? Why is there so little dark energy, compared to what it “should” be? Why is there so much, compared to everything else in the Universe? These are not merely popularizations or vulgarizations but genuine, if schematic, descriptions of a few of our ongoing explorations.
This is a broad and in places unconventional overview of the strengths and shortcomings of our standard models of fundamental physics and of cosmology. The emphasis is on ideas that have accessible experimental consequences. It becomes clear that the frontiers of these subjects share much common ground.
..., one set of exogenous parameters in the standard model of cosmology specifies a few average properties of matter, taken over large spatial volumes. These are the densities of ordinary matter (i.e., of baryons), of dark matter, and of dark energy.
We know quite a lot about ordinary matter, of course, and we can detect it at great distances by several methods. It contributes about 3% of the total density.
Concerning dark (actually, transparent) matter we know much less. It has been “seen” only indirectly, through the influence of its gravity on the motion of visible matter. We observe that dark matter exerts very little pressure, and that it contributes about 30% of the total density.
Finally, dark (actually, transparent) energy contributes about 67% of the total density. It has a large negative pressure. From the point of view of fundamental physics this dark energy is quite mysterious and disturbing, as I’ll elaborate below.
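As a small numerical aside (assuming a Hubble constant of about 70 km/s/Mpc, which the lecture does not quote), the fractions above can be turned into absolute densities via the critical density $\rho_c = 3H_0^2/8\pi G$:

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22          # assumed Hubble constant, ~70 km/s/Mpc in 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~9e-27 kg/m^3

# Density fractions as quoted in the text.
fractions = {"ordinary matter": 0.03, "dark matter": 0.30, "dark energy": 0.67}
for name, omega in fractions.items():
    print(f"{name:15s} {omega * rho_crit:.1e} kg/m^3")
```

The totals amount to only a few proton masses per cubic meter, which is part of what makes the naive vacuum-energy estimate discussed later so jarring.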
Perhaps not quite so sharply posed, but still very promising, is the problem of the origin of the highest energy cosmic rays. It remains controversial whether there are so many events observed at energies above those at which protons or photons could travel cosmological distances that explaining their existence requires us to invoke new fundamental physics. However this plays out, we clearly have a lot to learn about the composition of these events, their sources, and the acceleration mechanisms.
The observed values of the ratios ...[formulas for the cosmological density ratios] are extremely peculiar from the point of view of fundamental physics, as currently understood. Leading ideas from fundamental theory about the origin of dark matter and the origin of baryon number ascribe them to causes that are at best very remotely connected, and existing physical ideas about the dark energy, which are sketchy at best, don’t connect it to either of the others. Yet the ratios are observed to be close to unity. And the fact that these ratios are close to unity is crucial to cosmic ecology; the world would be a very different place if their values were grossly different from what they are.
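Taking the fractions quoted earlier at face value, the ratios in question are

$$ \frac{\rho_{\text{dark matter}}}{\rho_{\text{baryons}}} \approx \frac{0.30}{0.03} \approx 10, \qquad \frac{\rho_{\text{dark energy}}}{\rho_{\text{dark matter}}} \approx \frac{0.67}{0.30} \approx 2, $$

i.e. within roughly an order of magnitude of unity, even though the three densities are, as far as we know, set by unrelated mechanisms.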
Several physicists, among whom S. Weinberg was one of the earliest and remains among the most serious and persistent, have been led to wonder whether it might be useful, or even necessary, to take a different approach, invoking anthropic reasoning. Many physicists view such reasoning as a compromise or even a betrayal of the goal of understanding the world in rational, scientific terms. Certainly, some adherents of the “Anthropic Principle” have overdone it. No such “Principle” can substitute for deep principles like symmetry and locality, which support a vast wealth of practical and theoretical applications, or the algorithmic description of Nature in general. But I believe there are specific, limited circumstances in which anthropic reasoning is manifestly appropriate and unavoidable.
It has become conventional to say that our knowledge of fundamental physical law is summarized in a Standard Model. But this convention lumps together two quite different conceptual structures, and leaves out another. I think it is more accurate and informative to say that our current, working description of fundamental physics is based on three standard conceptual systems. These systems are very different; so different, that it is not inappropriate to call them the Good, the Bad, and the Ugly. They concern, respectively, the coupling of vector gauge particles, gravitons, and Higgs particles. It is quite a remarkable fact, in itself, that every nonlinear interaction we need to summarize our present knowledge of the basic (i.e., irreducible) laws of physics involves one or another of these particles.
Looking critically at the structure of a single standard model family, as displayed in Figure 3, one has no trouble picking out flaws.
The gauge symmetry contains three separate pieces, and the fermion representation contains five separate pieces. While this is an amazingly tight structure, considering the wealth of phenomena described, it clearly fails to achieve the ultimate in simplicity and irreducibility. Let me remind you, in this context, that electroweak “unification” is something of a misnomer. There are still two separate symmetries, and two separate coupling constants, in the electroweak sector of the standard model. It is much more accurate to speak of electroweak “mixing”.
Worst of all, the abelian U(1) symmetry is powerless to quantize its corresponding charges. The hypercharge assignments – indicated in Figure 3 by the numerical subscripts – must be chosen on purely phenomenological grounds. On the face of it, they appear in a rather peculiar pattern. If we are counting continuous parameters, the freedom to choose their values takes us from three to seven (and more, if we restore the families). The electrical neutrality of atoms is a striking and fundamental fact, which has been checked to extraordinary precision, and which is central to our understanding of Nature. In the standard model this fact appears, at a classical level, to require finely tuned hand-adjustment.
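To see the hand-adjustment concretely (a sketch in one common convention, $Q = T_3 + Y/2$; the normalization in Figure 3 may differ), the hypercharges are assigned as $Y(Q_L) = \tfrac13$, $Y(u_R) = \tfrac43$, $Y(d_R) = -\tfrac23$, $Y(L_L) = -1$, $Y(e_R) = -2$, and the neutrality of hydrogen then rests on the arithmetic

$$ Q_p + Q_e = \left(2\cdot\tfrac{2}{3} - \tfrac{1}{3}\right) + (-1) = 0, $$

which comes out exactly only because those rational values were put in by hand; nothing in the $U(1)$ factor itself forces them.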
What makes this very tight, predictive, and elegant theory of quantum gravity “bad” is not that there is any experiment that contradicts it. There isn’t. Nor, I think, is the main problem that this theory cannot supply predictions for totally academic thought experiments about ultrahigh energy behavior. It can’t, but there are more pressing issues that might have more promise of leading to contact between theory and empirical reality.
A great lesson of the standard model is that what we have been evolved to perceive as empty space is in fact a richly structured medium. It contains symmetry-breaking condensates associated with electroweak superconductivity and spontaneous chiral symmetry breaking in QCD, an effervescence of virtual particles, and probably much more. Since gravity is sensitive to all forms of energy it really ought to see this stuff, even if we don’t. A straightforward estimation suggests that empty space should weigh several orders of magnitude of orders of magnitude (no misprint here!) more than it does. It “should” be much denser than a neutron star, for example. The expected energy of empty space acts like dark energy, with negative pressure, but there’s much too much of it.
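A back-of-the-envelope version of that estimate (my numbers; the lecture does not spell them out): with a Planck-scale cutoff,

$$ \rho_{\text{vac}}^{\text{naive}} \sim \frac{c^5}{\hbar G^2} \sim 10^{96}\ \text{kg/m}^3, \qquad \rho_{\Lambda}^{\text{obs}} \sim 10^{-26}\ \text{kg/m}^3, $$

a mismatch of roughly 120 orders of magnitude. Even cutting the estimate off at the QCD condensate scale, $(\sim\!0.2\ \text{GeV})^4 \sim 10^{17}\ \text{kg/m}^3$, already puts “empty” space in neutron-star territory, which is the sense of the comparison above.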
To me this discrepancy is the most mysterious fact in all of physical science, the fact with the greatest potential to rock the foundations. We’re obviously missing some major insight here. Given this situation, it’s hard to know what to make of the ridiculously small amount of dark energy that presently dominates the Universe!
We know of no deep principle, comparable to gauge symmetry or general covariance, which constrains the values of these couplings tightly. For that reason, it is in this sector where continuous parameters proliferate, into the dozens. Basically, we introduce each observed mass and weak mixing angle as an independent input, which must be determined empirically. The phenomenology is not entirely out of control: the general framework (local relativistic quantum field theory, gauge symmetry, and renormalizability) has significant consequences, and even this part of the standard model makes many non-trivial predictions and is highly over-constrained. ...
Neutrino masses and mixings can be accommodated along similar lines, if we expand the framework slightly. ... The flavor/Higgs sector of fundamental physics is its least satisfactory part. Whether measured by the large number of independent parameters or by the small number of powerful ideas it contains, our theoretical description of this sector does not attain the same level as we’ve reached in the other sectors. This part really does deserve to be called a “model” rather than a “theory”.
Finally let me mention one redeeming virtue of the Higgs sector. (“Virtue” might be too strong; actually, what I’m about to do is more in the nature of advertising a bug as a feature.)
The character of jets is dominated by the influence of intrinsically nonabelian gauge dynamics. These proven insights into fundamental physics ramify in many directions, and are far from being exhausted. I will discuss three rewarding explorations from my own experience, whose point of departure is the hard Yang-Mills interaction, and whose end is not yet in sight. Given an insight as profound and fruitful as the one Yang and Mills brought us, it is in order to consider its broadest implications, which I attempt at the end.
Physicists usually have a nonchalant attitude when the number of dimensions is extended to infinity. Optimism is the rule, and every infinite sequence is presumed to be convergent, unless proven guilty.
A slightly different perspective on renormalizability is associated with the philosophy of effective field theory. According to this philosophy it is presumptuous, or at least unnecessarily committal, to demand that our theories be self-contained up to arbitrarily large energies. So we should not demand that the effect of a high-mass cutoff, which marks the breakdown of our effective theory, can be removed entirely. Instead, we acknowledge that new degrees of freedom may open up at the large mass scale, and we postulate only that these degrees of freedom approximately decouple from low-scale physics. By requiring that the effective theory they leave behind should be self-contained and approximately valid up to the high mass scale, we are then led to a similar "effective" veto, which outlaws quantitatively significant nonrenormalizable couplings.
Of course, this philosophy only puts off the question of consistency, passing that burden on to the higher mass-scale theory. Presumably this regress must end somewhere, either in a fully consistent quantum field theory or in something else (string theory?).
Most theoretical physicists today have come around to the point of view that the standard model of which we're so proud, the quantum field theory of weak, electromagnetic and strong interactions, is nothing more than a low energy approximation to a much deeper and quite different underlying field theory.
If there are to be simple explanations for complex phenomena, what form can they take?
One archetype is symmetry. In fundamental physics, especially in the twentieth century, symmetry has been the most powerful and fruitful guiding principle. By tying together the description of physical behavior in many different circumstances – at different places, at different times, viewed at different speeds and, of course, in different gauges! – it allows us to derive a wealth of consequences from our basic hypotheses. When combined with the principles of quantum theory, symmetry imposes very stringent consistency requirements, as we have discussed, leading to tight, predictive theories, of which Yang-Mills theory forms the archetype within the archetype.
(In the present formulation of physics quantum theory itself appears as a set of independent principles, which loosely define a conceptual framework. It is not absurd to hope that in the future these principles will be formulated more strictly, in a way that involves symmetry deeply.)
A different archetype, which pervades biology and cosmology, is the unfolding of a program. Nowadays we are all familiar with the idea that simple computer programs, unfolded deterministically according to primitive rules, can produce fantastically complicated patterns, such as the Mandelbrot set and other fractals; and with the idea that a surprisingly small library of DNA code directs biological development.
These archetypes are not mutually exclusive. Conway's Game of Life, for example, uses simple, symmetric, deterministic rules, always and everywhere the same; but it can, operating on simple input, produce extremely complex, yet highly structured output.
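As a concrete illustration (standard Life, nothing specific to the lecture), the entire rule fits in a few lines, yet already supports self-propagating structures such as the glider:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 live neighbours,
    # or 2 live neighbours and is alive now.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic 5-cell glider
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # same shape as the start, shifted by (1, 1)
```

Four applications of the rule reproduce the starting pattern one diagonal step over, and from such pieces arbitrarily elaborate, even universal, machinery can be built.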
In fundamental physics to date, we have mostly got along without having to invoke partial unfolding of earlier, primary simplicity as a separate explanatory principle. In constructing a working model of the physical world, to be sure, we require specification of initial conditions for the fundamental equations. But we have succeeded in paring these initial conditions down to a few parameters describing small departures from space-time homogeneity and thermal equilibrium in the very early universe; and the roles of these two aspects of world-construction, equations and initial conditions, have remained pretty clearly separated. Whether symmetry will continue to expand its explanatory scope, giving rise to laws of such power that their solution is essentially unique, thus minimizing the role of initial conditions; or whether 'fundamental' parameters (e.g., quark and lepton masses and mixing angles) in fact depend upon our position within an extended, inhomogeneous Multiverse, so that evolutionary and anthropic considerations will be unavoidable; or whether some deeper synthesis will somehow remove the separation, is a great question for the future.
These pictures make it clear and tangible that the quantum vacuum is a dynamic medium, whose properties and responses largely determine the behavior of matter. ... The masses of hadrons, then, are uniquely associated to tones emitted by the dynamic medium of space when it is disturbed in various ways ... We thereby discover, in the reality of masses, an algorithmic, precise Music of the Void. It is a modern embodiment of the ancients’ elusive, mystical “Music of the Spheres”.
Asymptotic freedom was developed as a response to two paradoxes: the weirdness of quarks, and in particular their failure to radiate copiously when struck; and the coexistence of special relativity and quantum theory, despite the apparent singularity of quantum field theory. It resolved these paradoxes, and catalyzed the development of several modern paradigms: the hard reality of quarks and gluons, the origin of mass from energy, the simplicity of the early universe, and the power of symmetry as a guide to physical law.
In theoretical physics, paradoxes are good. That’s paradoxical, since a paradox appears to be a contradiction, and contradictions imply serious error. But Nature cannot realize contradictions. When our physical theories lead to paradox we must find a way out. Paradoxes focus our attention, and we think harder.
Powerful interactions ought to be associated with powerful radiation. When the most powerful interaction in nature, the strong interaction, did not obey this rule, it posed a sharp paradox.
The second paradox is more conceptual. Quantum mechanics and special relativity are two great theories of twentieth-century physics. Both are very successful. But these two theories are based on entirely different ideas, which are not easy to reconcile. In particular, special relativity puts space and time on the same footing, but quantum mechanics treats them very differently. This leads to a creative tension, whose resolution has led to three previous Nobel Prizes (and ours is another).
So we had the paradox, that combining quantum mechanics and special relativity seemed to lead inevitably to quantum field theory; but quantum field theory, despite substantial pragmatic success, self-destructed logically due to catastrophic screening.
The Schwarzschild solution is used to find the exact relativistic motion of a payload in the gravitational field of a mass moving with constant velocity. At radial approach or recession speeds faster than 3^-1/2 times the speed of light, even a small mass gravitationally repels a payload. At relativistic speeds, a suitable mass can quickly propel a heavy payload from rest nearly to the speed of light with negligible stresses on the payload.
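For the purely radial case the relevant formula is a standard Schwarzschild-geodesic result (sketched here from the textbook coordinate-acceleration expression; the paper's own derivation may differ in presentation):

$$ \frac{d^2 r}{dt^2} \;=\; -\frac{GM}{r^2}\left(1 - \frac{2GM}{rc^2}\right)\left[1 - \frac{3\,(dr/dt)^2}{c^2\left(1 - 2GM/rc^2\right)^2}\right], $$

which in the weak-field limit reduces to $-\frac{GM}{r^2}\left(1 - 3v_r^2/c^2\right)$ and changes sign for $|v_r| > c/\sqrt{3}$, the $3^{-1/2}$-times-the-speed-of-light threshold quoted in the abstract.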
We introduce a quantum mechanical model of time travel which includes two figurative beam splitters in order to induce feedback to earlier times. This leads to a unique solution to the grandfather paradox: once the future has unfolded, it cannot change the past, and so the past becomes deterministic. Looking forward toward the future, on the other hand, is completely probabilistic. This resolves the classical paradox in a philosophically satisfying manner.
Paul Driscoll uses a time machine to try to change three past events: the bombing of Hiroshima, Hitler's rise to power and the sinking of the Lusitania. He fails miserably at all of them, and decides to escape to the past. He picks Homeville, Indiana. From a history book he has brought along, he learns that a fire, started by runaway horses, will burn down a school and injure several children. He sees the wagon with the horses and, in trying to convince the owner to unhitch them, frightens the horses, and they start the fire. Driscoll returns to the present, content to leave the past alone.
For years, cosmologists have been racing each other to develop ever more sophisticated and realistic models of the evolution of the Universe. The competition has just become considerably stiffer.