Thursday, June 30, 2005

Saddam Hussein the Literary Lion Roars Again



Article in the New York Times on Saddam Hussein's latest literary effort. Doubt I'll read it (it's only available in Arabic at the moment) but the cover is cool.

Wednesday, June 29, 2005

Fire Ants Start a Sexual Revolution


Selfish genes. Fire ant queens and males fight the battle to propagate their genes by reproducing clonally.
CREDIT: Hawaii State Department of Agriculture


Do female fire ants belong to a different species than the males?

News report in Science.

Nature news:
Males pit their genes against females by chucking DNA out of eggs.
The sperm of the male ant appears to be able to destroy the female DNA within a fertilized egg, giving birth to a male that is a clone of its father. Meanwhile, the female queens make clones of themselves to carry on the royal female line.


From Evolutionary biology: Males from Mars in Nature
In an ant species — or is it two species? — females are produced only by females and males only by males. Explanations of this revelation have to invoke some decidedly offbeat patterns of natural selection.


Apparently in this species of fire ant (which is quite a common pest), queens pass on their genes to daughter queens without mixing in male genes - this is not the case in most ant species. Males have somehow managed to take revenge: sons contain only the genes of their fathers. However workers, which are sterile, are still produced sexually! Bizarre.

Thursday, June 23, 2005

Why does the Moon appear larger when it's on the horizon?


We just had a spectacular June moon coincident with the summer solstice - see this article on the BBC for example.

Why does the moon (or the sun) look bigger when it's on the horizon? First of all, the image of the moon isn't actually smaller when it's high in the sky. Compare it to a coin held at arm's length when the moon is high and again when it's low, and you'll see it's the same. (Though the moon can be genuinely bigger on particular days, when it's closer to Earth in its elliptical orbit - such as we had recently.)

The explanation is that we judge the moon to be farther away when it is on the horizon, because we have objects on the horizon for comparison. See this page at IBM for more details.
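Out of curiosity, here's a quick back-of-the-envelope check in Python (a sketch using my own round figures for the Moon's size and distance, not numbers from the articles above). If anything, simple geometry says the horizon moon should look very slightly smaller, because it's almost one Earth radius farther from the observer:

import math

MOON_DIAMETER_KM = 3474.0    # mean diameter of the Moon (my round figure)
MOON_DISTANCE_KM = 384400.0  # mean Earth-centre to Moon distance (my round figure)
EARTH_RADIUS_KM = 6371.0

def angular_size_deg(distance_km):
    # Angular diameter of the Moon as seen from the given distance
    return math.degrees(2 * math.atan2(MOON_DIAMETER_KM / 2, distance_km))

# Overhead, the observer stands roughly one Earth radius closer to the Moon;
# on the horizon, the observer-to-Moon distance is nearly the full
# centre-to-centre distance.
overhead = angular_size_deg(MOON_DISTANCE_KM - EARTH_RADIUS_KM)
horizon = angular_size_deg(math.sqrt(MOON_DISTANCE_KM**2 - EARTH_RADIUS_KM**2))

print(f"overhead: {overhead:.3f} degrees")  # about 0.53
print(f"horizon:  {horizon:.3f} degrees")   # about 0.52 - slightly smaller!

Either way it's about half a degree - the dramatic difference we see is entirely in our heads.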

Brain Scans of Female Orgasms

The BBC and numerous other sites reported on Dutch research in which couples' brains were scanned during orgasm. The couples took turns manually stimulating each other while one partner had their head in a brain scanner. First, there was little to report about male orgasms: they were too brief to register on current scanning devices. Cold feet were also a problem - literally. Only 50% of the couples were able to achieve orgasm with bare feet, while 80% could come wearing socks. Definitely something to keep in mind.
As it turns out, no particular part of the brain was activated during female orgasm - but large parts of the brain were definitely deactivated. Specifically, the parts associated with higher mental functions shut down, as did parts associated with the emotion of fear.
The women were also asked to fake an orgasm. The researchers reported they were quite convincing to the observer, but the difference was obvious on the brain scans.


Genuine orgasm: less brain activity


Telltale brain activity in a fake orgasm

A 'Jennifer Aniston' Cell?

In Nature this week, there's a report and a News and Views article supporting the previously dismissed idea of a "grandmother cell".
How do neurons in the brain represent movie stars, famous buildings and other familiar objects? Rare recordings from single neurons in the human brain provide a fresh perspective on the question.

'Grandmother cell' is a term coined by J. Y. Lettvin to parody the simplistic notion that the brain has a separate neuron to detect and represent every object (including one's grandmother). The phrase has become a shorthand for invoking all of the overwhelming practical arguments against a one-to-one object coding scheme. No one wants to be accused of believing in grandmother cells. But on page 1102 of this issue, Quiroga et al. describe a neuron in the human brain that looks for all the world like a 'Jennifer Aniston' cell. Ms Aniston could well become a grandmother herself someday. Are vision scientists now forced to drop their dismissive tone when discussing the neural representation of matriarchs?


A specific neuron responded to pictures of Ms. Aniston


But not, however, when she was depicted with Brad Pitt
(watch that hand, Jennifer!)

Astronomical pacing of late Palaeocene to early Eocene global warming events

Letter in Nature.

The Palaeocene-Eocene thermal maximum was a strong global warming event about 55 million years ago. The authors discovered another, somewhat smaller warming episode which occurred about 2 million years later, and related both events to periodic changes in Earth's orbital parameters. See also my previous blog entry on the Paleo/Eocene event.

Saturday, June 18, 2005

Handbag Studio by Thomas Keneally


Thomas Keneally on how Schindler's list found him in Beverly Hills in Granta. Nice story.

Friday, June 17, 2005

The Story of O2


Banded Iron Formation

There's a really nice article in this week's Science about the history of gaseous oxygen in the earth's atmosphere. Everyone agrees that the earth started out without any gaseous diatomic oxygen; the debate has been about when free oxygen first appeared. Minerals older than about 2.4 billion years show no signs of oxidized iron (rust), but some researchers believed there was evidence of atmospheric oxygen even earlier. Now a detailed analysis of atmospheric sulfur chemistry supports the later date for oxygen. So there seems to have been a "Great Oxidation Event" around 2.4 billion years ago, in which banded iron formations were laid down around the world. Photosynthetic algae appeared around 2.7 billion years ago, and perhaps by then they were producing enough oxygen to create the iron formations.

In the same issue of Science, there's also an article with evidence that the earth held on to its gaseous hydrogen for longer than previously thought, which would help explain some of the history of early life on the planet.

Neutrinos have Mass!


1987A supernova remnant near the center

Neutrinos are produced in several different ways: by natural radioactivity here on earth; by nuclear reactors; in copious quantities in the Sun and other stars; in supernovas (Wikipedia SN1987A article); and in the Big Bang (these relic neutrinos have yet to be detected directly, though there's indirect evidence: Neutrino ripples spotted in space). In all cases they are ghostly particles, very hard to detect.

The Super-Kamiokande detector, half-filled with pure water. (Courtesy of the Institute for Cosmic Ray Research, the University of Tokyo.)

They were postulated theoretically by Wolfgang Pauli in 1930, but it took until 1956 to actually detect them.
For a long time one of the great mysteries of physics was the "Solar Neutrino Problem": the number of neutrinos we could actually detect from the Sun was about a third of the expected flux. This was finally resolved by oscillations among the three types of neutrinos (hence the 1/3). Oscillations imply that neutrinos have mass - the absolute masses have yet to be measured, though they must be very, very small. Until the oscillations were discovered, it was usually assumed that neutrinos were massless, for both theoretical and experimental reasons.
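To make the mass connection concrete: in the standard two-flavour vacuum oscillation formula, the oscillation depends on the difference of the squared masses, so if all neutrino masses were zero nothing could oscillate and there would be no deficit. A minimal Python sketch (the parameter values are my own illustrative round numbers, and the real solar case also involves matter effects, so treat this purely as an illustration):

import math

def electron_neutrino_survival(L_km, E_GeV, sin2_2theta=0.8, dm2_eV2=8e-5):
    # Two-flavour vacuum oscillation:
    #   P(nu_e -> nu_e) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
    # The constant 1.27 absorbs hbar, c and the unit conversions for
    # dm2 in eV^2, L in km and E in GeV. With dm2 = 0 (massless or
    # degenerate neutrinos), P is identically 1: no deficit possible.
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Averaged over many oscillation lengths, as for neutrinos travelling from
# the Sun, P tends toward 1 - sin^2(2*theta)/2: a constant deficit.
print(electron_neutrino_survival(L_km=1.496e8, E_GeV=0.001))  # Sun-Earth, ~1 MeV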
Here are two "popular" articles by John N. Bahcall of Princeton:
Solar Neutrinos: A Popular Account and Solving the Mystery of the Missing Neutrinos
The three years 2001 to 2003 were the golden years of solar neutrino research. In this period, scientists solved a mystery with which they had been struggling for four decades. The solution turned out to be important for both physics and for astronomy. In this article, I tell the story of those fabulous three years.

Here's a recent, more technical review of Neutrino Physics in general by Boris Kayser.

There are many interesting questions about neutrinos which still need to be answered by new experiments (from Kayser).

1. How many neutrino species are there? Are there sterile neutrinos?
There's already an experiment (LSND) which suggests that there might be more than three types of neutrinos. If there are more than three types then there are combinations which do not even couple to the weak nuclear force - they would only interact via gravity. These are the so-called sterile neutrinos.

2. Are neutrinos their own antiparticles?
Charged particles cannot be their own antiparticles, since an antiparticle must have the opposite electric charge. But neutrinos are, naturally, electrically neutral, so it's possible. It could be, however, that there's another conserved quantity, the "lepton number". It's not clear whether there is such a quantity, but if there is, there should be distinct antineutrinos.

3. Do neutrino interactions violate CP (charge/parity conservation)?
Why does the universe seem to be made almost entirely of matter, rather than equal amounts of matter and antimatter? CP violation occurs in quarks, but that doesn't appear to be sufficient to explain the observed matter/antimatter imbalance. Since we are made of matter not antimatter, perhaps neutrino interactions are the reason we exist!

Let us not forget neutrino poetry, by John Updike no less.

Recasting Mermin's multi-player game into the framework of pseudo-telepathy

Preprint by Gilles Brassard, Anne Broadbent, Alain Tapp
Entanglement is perhaps the most non-classical manifestation of quantum mechanics. Among its many interesting applications to information processing, it can be harnessed to reduce the amount of communication required to process a variety of distributed computational tasks. Can it be used to eliminate communication altogether? Even though it cannot serve to signal information between remote parties, there are distributed tasks that can be performed without any need for communication, provided the parties share prior entanglement: this is the realm of pseudo-telepathy.
One of the earliest uses of multi-party entanglement was presented by Mermin in 1990. Here we recast his idea in terms of pseudo-telepathy: we provide a new computer-scientist-friendly analysis of this game. We prove an upper bound on the best possible classical strategy for attempting to play this game, as well as a novel, matching lower bound. This leads us to considerations on how well imperfect quantum-mechanical apparatus must perform in order to exhibit a behaviour that would be classically impossible to explain. Our results include improved bounds that could help vanquish the infamous detection loophole.

In Mermin's game, n players are given a simple task which definitely requires communication in a classical setting. By using quantum techniques (involving the notorious concept of entanglement), the task can be performed without any communication at all - hence the "pseudo-telepathy".

The problem, by the way, is very simple. It requires three or more players. Each is given an input bit (0 or 1), with the promise that the total number of input 1's is even. Each player then produces an output bit. The winning condition is that the sum of the output bits, mod 2, must equal half the sum of the input bits, mod 2: if the sum of the inputs is divisible by four, the players must produce an even number of output 1's; otherwise (the sum is then 2 mod 4) they must produce an odd number.
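Here's a small brute-force check in Python (my own illustration, not code from the paper) that with three players no classical strategy can win on every input allowed by the promise. Randomized classical strategies are just mixtures of deterministic ones, so they can't beat the best deterministic success rate either:

from itertools import product

N = 3  # players

# Inputs allowed by the promise: an even number of input 1's
allowed = [x for x in product([0, 1], repeat=N) if sum(x) % 2 == 0]

def wins(strategy, x):
    # strategy[i] is player i's rule: (output on input 0, output on input 1)
    outputs = [strategy[i][x[i]] for i in range(N)]
    # Winning condition: sum of outputs mod 2 = half the sum of inputs mod 2
    return sum(outputs) % 2 == (sum(x) // 2) % 2

best = max(
    sum(wins(s, x) for x in allowed)
    for s in product(product([0, 1], repeat=2), repeat=N)
)
print(f"best classical strategy wins {best} of {len(allowed)} inputs")  # 3 of 4

Players sharing an entangled state, by contrast, win every time.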

Chi-hwa-seon a film by Im Kwon Taek



The Korean film festival continues with this period piece about the life of the 19th century Korean painter Ohwon. Beautiful photography but a somewhat chaotic story of a chaotic life and times.

By the way, in the shot on the cover of the DVD, I believe the characters are depicted in sexual intercourse. It's very hard to tell what's going on beneath those elaborate Korean costumes!



Chunhyang is another even more beautiful film by the same director.

Thursday, June 16, 2005

The Isle, a film by Ki-duk Kim



A beautiful, cruel and disturbing Korean film. Pretty little fishing huts atop small barges are moored on an idyllic lake. The mute female proprietor provides fishing tackle, coffee and sex to visiting fishermen. Serial fish hook abuse ensues.

Sunday, June 12, 2005

Redback Spiders

Andrade's web page is fascinating, not to mention that she must be the most attractive spider copulation researcher imaginable!

Maydianne C.B. Andrade

Much of my current research involves studies of the sexually cannibalistic Australian redback spider (Latrodectus hasselti), and its close relatives, the black widows (genus Latrodectus). Redback spiders are intriguing because males actively 'encourage' females to cannibalize them while they mate. Unlike most other sexually cannibalistic species (e.g., praying mantids) where males attempt to escape from the female's jaws, redback males actually 'somersault' onto the female's mouthparts during copulation - male sexual sacrifice.


The copulatory somersault
(A) Copulation begins with the male standing on the female's abdomen. Both spiders are facing in the same direction and are 'belly to belly'. The male has two copulatory organs (the palps) that are attached at the anterior-most part of his 'head' (cephalothorax). Copulation begins when one of the palps is inserted into the female's genital opening. In most other black widow spiders, the pair copulates while in this posture.

(B) In redbacks, however, a few seconds after palp insertion, the male, using the palp as a pivot, moves into a 'headstand' posture.

(C) The male then quickly turns through 180 degrees, landing with his 'back' (the dorsal surface of the abdomen), directly above the female's fangs. In most matings, the female begins to extrude digestive enzymes almost immediately. She also pierces the male's abdomen with her fangs and begins to consume him while he is transferring sperm.

There's a video of this on her web site.
A previous spider mating post

Friday, June 10, 2005

Appearance DOES Matter


Which person is more babyfaced?
FROM TODOROV et al.
Perspective in Science.
Take a look at these two snapshots. Which man is more babyfaced? Most viewers would say it's the person on the right. And that's the person who lost a 2004 U.S. congressional election to his more mature-faced and competent-looking opponent. In fact, about 70% of recent U.S. Senate races were accurately predicted based on which candidates looked more competent from a quick glance at their faces. This remarkable effect, reported by Todorov et al. on page 1623 of this issue, likely reflects differences in "babyfacedness". A more babyfaced individual is perceived as less competent than a more mature-faced, but equally attractive, peer of the same age and sex. Although we like to believe that we "don't judge a book by its cover," superficial appearance qualities such as babyfacedness profoundly affect human behavior in the blink of an eye.

Rapid Acidification of the Ocean During the Paleocene-Eocene Thermal Maximum

According to the wikipedia entry "The end of the Paleocene (55.5/54.8 Ma) was marked by one of the most significant periods of global change during the Cenozoic, a sudden global change, the Paleocene-Eocene Thermal Maximum, which upset oceanic and atmospheric circulation and led to the extinction of numerous deep-sea benthic foraminifera and on land, a major turnover in mammals."

report in Science. Summary:
A rapid and large global warming event, the Paleocene-Eocene Thermal Maximum (PETM), raised interior ocean temperatures by 4° to 5°C around 55 million years ago, a rise not equaled in any single event since then. This warming, whose origin is still debated, was accompanied by a dramatic negative carbon isotopic excursion. One hypothesis is that the release of 2000 gigatons of carbon from the destabilization of methane clathrates on the sea floor accounts for both the carbon isotopic signal and the temperature increase. Zachos et al. (p. 1611) now show that the carbonate compensation depth (roughly the depth at which calcium carbonate is no longer found in the sediment, because of dissolution during sinking) of the ocean rose by more than 2 kilometers during the PETM, which could have happened only if the amount of CO2 added to the ocean was much more than that which has been estimated in the clathrate scenario. They find that 4000 gigatons of carbon would have been needed, so the release of clathrates alone could not have been the cause of the warming.

Art of Science Competition at Princeton


Driven
Anton Darhuber, Benjamin Fischer and Sandra Troian
Microfluidic Research and Engineering Laboratory, Department of Chemical Engineering
SECOND PRIZE WINNER
This image illustrates evolving dynamical patterns formed during the spreading of a surface-active substance (surfactant) over a thin liquid film on a silicon wafer. After spin-coating of glycerol, small droplets of oleic acid were deposited. The usually slow spreading process was highly accelerated by the surface tension imbalance that triggered a cascade of hydrodynamic instabilities. Such surface-tension driven flow phenomena are believed to be important for the self-cleaning mechanism of the lung as well as pulmonary drug delivery.

gallery page at Princeton.

DNA of Deadbeat Voles May Hint at Why Some Fathers Turn Out to Be Rats


Larry J. Young/Yerkes National Primate Research Center
Prairie voles on gels of their DNA, a possible clue to their home life.

Excellent article in today's New York Times.
Some male prairie voles are devoted fathers and faithful partners, while others are less satisfactory on both counts. The spectrum of behavior is shaped by a genetic mechanism that allows for quick evolutionary changes, two researchers from Emory University report in today's issue of Science.
...
People have the same variability in their DNA, with a control section that comes in at least 17 lengths detected so far, Dr. Young said.
...
The Emory researchers recently noticed that in their prairie vole colony, some fathers spent more time with their pups and some less. They traced the source of this variability to its molecular roots, a variation in the length of the DNA region that controls a certain gene.

This is the gene for the vasopressin receptor, the device used by neurons to respond to vasopressin. Voles with long and short DNA segments had different patterns of vasopressin receptors in their brains, which presumably changed their response to the hormone.

In Voles, a Little Extra DNA Makes for Faithful Mates news article in Science.
Prairie voles are renowned for being faithful mates, but some individuals are more faithful than others. The difference may lie in their so-called junk DNA.

... Elizabeth Hammock and Lawrence Young of Emory University in Atlanta, Georgia, report that fidelity and other social behaviors in male prairie voles seem to depend on the length of a particular genetic sequence in a stretch of DNA between their genes. The longer this repetitive sequence, or microsatellite, the more attentive males were to their female partner and their offspring. Those with shorter microsatellites neglected their mates and pups, at least to some degree.

Although there's no evidence that human infidelity or poor parenting stems from similar variations, Hammock and Young, as well as other researchers, have begun to explore whether microsatellites can account for behavioral differences between people and primates such as chimps and bonobos.

Microsatellite Instability Generates Diversity in Brain and Sociobehavioral Traits: the research article on vole genetics and behaviour in today's Science Magazine.

Here is Young's web page at Emory University.

Thursday, June 09, 2005

QCD and Natural Philosophy

by Frank Wilczek. This is getting embarrassing! The embarrassment of Wilczek riches continues with this 2002 preprint at the physics archive.

His take on the "Reductionist Program"
The reductionist program, roughly speaking, is to build up the description of Nature from a few laws that govern the behavior of elementary entities, and that can't be derived from anything simpler. This definition is loose at both ends.
On the output side, because in the course of our investigations we find that there are important facts about Nature that we have to give up on predicting. These are what we - after the fact! - come to call contingencies. Three historically important examples of different sorts are the number of planets in the Solar System, the precise moment that a radioactive nucleus will decay, or what the weather will be in Boston a year from today. In each of these cases, the scientific community at first believed that prediction would be possible. And in each case, it was a major advance to realize that there are good fundamental reasons why it is not possible.
On the input side, because it is difficult - perhaps impossible - ever to prove the non-existence of simpler principles. I'll revisit this aspect at the end of the lecture.
Nevertheless, and despite its horrible name, the reductionist program has been and continues to be both inspiring and astoundingly successful. Instead of trying to refine an arbitrary a priori definition of this program, it is more edifying to discuss its best fruits. For beyond question they do succeed to an astonishing extent in "reducing" matter to a few powerful abstract principles and a small number of parameters.

Cosmology
Although it is usually passed over in silence, I think it is very important philosophically, and deserves to be emphasized, that our standard cosmology is radically modest in its predictive ambitions. It consigns almost everything about the world as we find it to contingency. That includes not only the aforementioned question of the number of planets in the Solar System, but more generally every specific fact about every specific object or group of objects in the Universe, apart from a few large-scale statistical regularities. Indeed, specific structures are supposed to evolve from the primordial perturbations, and these are only characterized statistically. In inflationary models these perturbations arise as quantum fluctuations, and their essentially statistical character is a consequence of the laws of quantum mechanics.
This unavoidably suggests the question whether we might find ourselves forced to become even more radically modest. Let us suppose for the sake of argument the best possible case, that we had in hand the fundamental equations of physics. Some of my colleagues think they do, or soon will. Even then we have to face the question of what principle determines the solution of these equations that describes the observed Universe. Let me again suppose for the sake of argument the best possible case, that there is some principle that singles out a unique acceptable solution. Even then there is a question we have to face: If the solution is inhomogeneous, what determines our location within it?

Another rationalization for considering the Anthropic Principle
As we have just discussed, the laws of reductionist physics do not suffice to tell us about the specific properties of the Sun, or of Earth. Indeed, there are many roughly similar but significantly different stars and planets elsewhere in the same Universe. On the other hand, we can aspire to a rational, and even to some extent quantitative, “derivation” of the parameters of the Sun and Earth based on fundamental laws, if we define them not by pointing to them as specific objects – that obviates any derivation – but rather by characterizing broad aspects of their behavior.
In principle any behavior will do, but possibly the most important and certainly the most discussed is their role in supporting the existence of intelligent observers, the so-called anthropic principle. There are many peculiarities of the Sun and Earth that can be explained this way. A crude example is that the mass of the Sun could not be much bigger or much smaller than it actually is because it would burn out too fast or not radiate sufficient energy, respectively.
Now if the Universe as we now know it constitutes, like the Solar System, an inhomogeneity within some larger structure, what might be a sign of it? If the parameters of fundamental physics crucial to life – just the ones we’ve been discussing! – vary from place to place, and most places are uninhabitable, there would be a signature to expect. We should expect to find that some of these parameters appear very peculiar – highly nongeneric – from the point of view of fundamental theory, and that relatively small changes in their values would preclude the existence of intelligent observers. Weinberg has made a case that the value of the cosmological term Lambda fits this description; and I’m inclined to think that ... and several other combinations of the small number of ingredients in our reduced description of matter and astrophysics do too. A fascinating set of questions is suggested here, that deserves careful attention.

The Unreasonable Ineffectiveness of QCD

There are some aspects of QCD I find deeply troubling – though I’m not sure if I should!
I find it disturbing that it takes vast computer resources, and careful limiting procedures, to simulate the mass and properties of a proton with decent accuracy. And for real-time dynamics, like scattering, the situation appears pretty hopeless. Nature, of course, gets such results fast and effortlessly. But how, if not through some kind of computation, or a process we can mimic by computation?
Does this suggest that there are much more powerful forms of computation that we might aspire to tap into? Does it connect to the emerging theory of quantum computers? These musings suggest some concrete challenges: Could a quantum computer calculate QCD processes efficiently? Could it defeat the sign problem, that plagues all existing algorithms with dynamical fermions? Could it do real-time dynamics, which is beyond the reach of existing, essentially Euclidean, methods?
Or, failing all that, does it suggest some limitation to the universality of computation?
Deeply related to this is another thing I find disturbing. If you go to a serious mathematics book and study the rigorous construction of the real number system, you will find it is quite hard work and cumbersome. QCD, and for that matter the great bulk of physics starting with classical Newtonian mechanics, has been built on this foundation. In practice, it functions quite smoothly. It would be satisfying, though, to have a “more reduced” description, based on more primitive, essentially discrete structures. Fredkin and recently Wolfram have speculated at length along these lines. I don’t think they’ve got very far, and the difficulties facing such a program are immense. But it’s an interesting issue.

Inventory and Outlook of High Energy Physics

by Frank Wilczek. The list of well-written articles by Wilczek continues with this 2002 preprint at the physics archive.

"No Conclusion"
But all this progress should not mark an end. Rather it allows us to ask – that’s easy enough! – and (more impressive) to take meaningful, concrete stabs at answering some truly awesome questions. Do all the fundamental interactions derive from a single underlying principle? What is the quantum symmetry of space-time? To what extent are the laws of physics uniquely determined? Why is there any (baryonic) matter at all? What makes the dark matter? Why is there so little dark energy, compared to what it “should” be? Why is there so much, compared to everything else in the Universe? These are not merely popularizations or vulgarizations but genuine, if schematic, descriptions of a few of our ongoing explorations.

The Universe is a Strange Place

by Frank Wilczek. Continuing the Wilczek orgy - preprint in the physics archive. This time the subject is cosmology, instead of particle physics. The abstract:
This is a broad and in places unconventional overview of the strengths and shortcomings of our standard models of fundamental physics and of cosmology. The emphasis is on ideas that have accessible experimental consequences. It becomes clear that the frontiers of these subjects share much ground in common.


"Cosmology"
..., one set of exogenous parameters in the standard model of cosmology specifies a few average properties of matter, taken over large spatial volumes. These are the densities of ordinary matter (i.e., of baryons), of dark matter, and of dark energy.
We know quite a lot about ordinary matter, of course, and we can detect it at great distances by several methods. It contributes about 3% of the total density.
Concerning dark (actually, transparent) matter we know much less. It has been “seen” only indirectly, through the influence of its gravity on the motion of visible matter. We observe that dark matter exerts very little pressure, and that it contributes about 30% of the total density.
Finally dark (actually, transparent) energy contributes about 67% of the total density. It has a large negative pressure. From the point of view of fundamental physics this dark energy is quite mysterious and disturbing, as I’ll elaborate shortly below.

Cosmic Rays
Perhaps not quite so sharply posed, but still very promising, is the problem of the origin of the highest energy cosmic rays. It remains controversial whether there are so many events observed at energies above those where protons or photons could travel cosmological distances that explaining their existence requires us to invoke new fundamental physics. However this plays out, we clearly have a lot to learn about the compositions of these events, their sources, and the acceleration mechanisms.

Is the Universe a Strange Place?
The observed values of the ratios ...[formulas for the cosmological density ratios] are extremely peculiar from the point of view of fundamental physics, as currently understood. Leading ideas from fundamental theory about the origin of dark matter and the origin of baryon number ascribe them to causes that are at best very remotely connected, and existing physical ideas about the dark energy, which are sketchy at best, don’t connect it to either of the others. Yet the ratios are observed to be close to unity. And the fact that these ratios are close to unity is crucial to cosmic ecology; the world would be a very different place if their values were grossly different from what they are.
Several physicists, among whom S. Weinberg was one of the earliest and remains among the most serious and persistent, have been led to wonder whether it might be useful, or even necessary, to take a different approach, invoking anthropic reasoning. Many physicists view such reasoning as a compromise or even a betrayal of the goal of understanding the world in rational, scientific terms. Certainly, some adherents of the “Anthropic Principle” have overdone it. No such “Principle” can substitute for deep principles like symmetry and locality, which support a vast wealth of practical and theoretical applications, or the algorithmic description of Nature in general. But I believe there are specific, limited circumstances in which anthropic reasoning is manifestly appropriate and unavoidable.

The highlighted opinion concurs with my own - for whatever that's worth. The Anthropic Principle seems like one of those things which get invoked when you run out of better ideas. That Weinberg and Wilczek, who have had many very good ideas indeed, are driven to this resort seems like a bad sign to me.

A Constructive Critique of the Three Standard Systems

by F. Wilczek. Preprint in the physics archive.
Abstract:
It has become conventional to say that our knowledge of fundamental physical law is summarized in a Standard Model. But this convention lumps together two quite different conceptual structures, and leaves out another. I think it is more accurate and informative to say that our current, working description of fundamental physics is based on three standard conceptual systems. These systems are very different; so different, that it is not inappropriate to call them the Good, the Bad, and the Ugly. They concern, respectively, the coupling of vector gauge particles, gravitons, and Higgs particles. It is quite a remarkable fact, in itself, that every nonlinear interaction we need to summarize our present knowledge of the basic (i.e., irreducible) laws of physics involves one or another of these particles.

More critical remarks on the Standard Model:
Looking critically at the structure of a single standard model family, as displayed in Figure 3, one has no trouble picking out flaws.
The gauge symmetry contains three separate pieces, and the fermion representation contains five separate pieces. While this is an amazingly tight structure, considering the wealth of phenomena described, it clearly fails to achieve the ultimate in simplicity and irreducibility. Let me remind you, in this context, that electroweak “unification” is something of a misnomer. There are still two separate symmetries, and two separate coupling constants, in the electroweak sector of the standard model. It is much more accurate to speak of electroweak “mixing”.
Worst of all, the abelian U(1) symmetry is powerless to quantize its corresponding charges. The hypercharge assignments – indicated in Figure 3 by the numerical subscripts – must be chosen on purely phenomenological grounds. On the face of it, they appear in a rather peculiar pattern. If we are counting continuous parameters, the freedom to choose their values takes us from three to seven (and more, if we restore the families). The electrical neutrality of atoms is a striking and fundamental fact, which has been checked to extraordinary precision, and which is central to our understanding of Nature. In the standard model this fact appears, at a classical level, to require finely tuned hand-adjustment.

Gravity
What makes this very tight, predictive, and elegant theory of quantum gravity “bad” is not that there is any experiment that contradicts it. There isn’t. Nor, I think, is the main problem that this theory cannot supply predictions for totally academic thought experiments about ultrahigh energy behavior. It can’t, but there are more pressing issues, that might have more promise of leading to contact between theory and empirical reality.
A great lesson of the standard model is that what we have been evolved to perceive as empty space is in fact a richly structured medium. It contains symmetry-breaking condensates associated with electroweak superconductivity and spontaneous chiral symmetry breaking in QCD, an effervescence of virtual particles, and probably much more. Since gravity is sensitive to all forms of energy it really ought to see this stuff, even if we don’t. A straightforward estimation suggests that empty space should weigh several orders of magnitude of orders of magnitude (no misprint here!) more than it does. It “should” be much denser than a neutron star, for example. The expected energy of empty space acts like dark energy, with negative pressure, but there’s much too much of it.
To me this discrepancy is the most mysterious fact in all of physical science, the fact with the greatest potential to rock the foundations. We’re obviously missing some major insight here. Given this situation, it’s hard to know what to make of the ridiculously small amount of dark energy that presently dominates the Universe!
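For the curious, here's the usual back-of-the-envelope version of that estimate (my own round numbers, not Wilczek's): if the energy density of empty space were set by the Planck scale rather than by the few-milli-electronvolt scale of the observed dark energy, it would be larger by a factor of roughly (E_Planck / E_dark)^4.

import math

E_PLANCK_EV = 1.22e28  # Planck energy scale in eV
E_DARK_EV = 2.3e-3     # rough energy scale of the observed dark energy in eV

ratio = (E_PLANCK_EV / E_DARK_EV) ** 4
print(f"naive vacuum energy / observed: about 10^{math.log10(ratio):.0f}")

That's around 10^123 - "several orders of magnitude of orders of magnitude" indeed.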

The Flavor/Higgs Sector

We know of no deep principle, comparable to gauge symmetry or general covariance, which constrains the values of these couplings tightly. For that reason, it is in this sector where continuous parameters proliferate, into the dozens. Basically, we introduce each observed mass and weak mixing angle as an independent input, which must be determined empirically. The phenomenology is not entirely out of control: the general framework (local relativistic quantum field theory, gauge symmetry, and renormalizability) has significant consequences, and even this part of the standard model makes many non-trivial predictions and is highly over-constrained. ...
Neutrino masses and mixings can be accommodated along similar lines, if we expand the framework slightly. ... The flavor/Higgs sector of fundamental physics is its least satisfactory part. Whether measured by the large number of independent parameters or by the small number of powerful ideas it contains, our theoretical description of this sector does not attain the same level as we’ve reached in the other sectors. This part really does deserve to be called a “model” rather than a “theory”.

Another cute sarcastic remark (somewhat unfairly out of context):
Finally let me mention one redeeming virtue of the Higgs sector. (“Virtue” might be too strong; actually, what I’m about to do is more in the nature of advertising a bug as a feature.)

Longing for the Harmonies

by Frank Wilczek and Betsy Devine.
As I was reading the Wilczek preprints I thought, "Wilczek is such a good writer he should do a book for the general public". Well, he has; I have it and I've read it, but that was so long ago (1988) that unfortunately I have only vague memories - though mildly positive ones.

Yang-Mills Theory In, Beyond, and Behind Observed Reality

by Frank Wilczek, preprint in the physics archive.
Abstract
The character of jets is dominated by the influence of intrinsically nonabelian gauge dynamics. These proven insights into fundamental physics ramify in many directions, and are far from being exhausted. I will discuss three rewarding explorations from my own experience, whose point of departure is the hard Yang-Mills interaction, and whose end is not yet in sight. Given an insight so profound and fruitful as Yang and Mills brought us, it is in order to try to consider its broadest implications, which I attempt at the end.


Wilczek is an excellent writer (see especially his Nobel lecture in the previous post). This article probably isn't quite as accessible to a general audience (as you can probably see from the abstract). The average reader is probably not familiar with "Yang-Mills" theories and the article doesn't try to explain them, but it's still worth a peek, if you can skim past any unfamiliar technicalities.

For example: "But as physicists hungry for answers, we properly regard strict mathematical rigor as a desirable luxury, not an indispensable necessity". That might surprise non-physicists - mathematicians are often appalled by the pseudo-mathematical activities of theoretical physicists.

Here's another quote along the same lines, this time from "Quantum Theory: Concepts and Methods" by Asher Peres.
Physicists usually have a nonchalant attitude when the number of dimensions is extended to infinity. Optimism is the rule, and every infinite sequence is presumed to be convergent, unless proven guilty.

A mathematician, by contrast, would probably treat an infinite sequence with extreme suspicion until it was proved to converge.

Our current theories of particle physics are not really viewed as anything more than "low-energy" approximations; they can't even be made to produce sensible answers at higher energies. Here's another quote from the Wilczek article:
A slightly different perspective on renormalizability is associated with the philosophy of effective field theory. According to this philosophy it is presumptuous, or at least unnecessarily committal, to demand that our theories be self-contained up to arbitrarily large energies. So we should not demand that the effect of a high-mass cutoff, which marks the breakdown of our effective theory, can be removed entirely. Instead, we acknowledge that new degrees of freedom may open up at the large mass scale, and we postulate only that these degrees of freedom approximately decouple from low-scale physics. By requiring that the effective theory they leave behind should be self-contained and approximately valid up to the high mass scale, we are then led to a similar "effective" veto, which outlaws quantitatively significant nonrenormalizable couplings.
Of course, this philosophy only puts off the question of consistency, passing that burden on to the higher mass-scale theory. Presumably this regress must end somewhere, either in a fully consistent quantum field theory or in something else (string theory?).


Here's another version of this theme, this time from Steven Weinberg in his 1986 Dirac Memorial Lecture Towards the Final Laws of Physics (which is included in the slim volume Elementary Particles and the Laws of Physics by Richard P. Feynman & Steven Weinberg - Feynman's article is also very good).
Most theoretical physicists today have come around to the point of view that the standard model of which we're so proud, the quantum field theory of weak, electromagnetic and strong interactions, is nothing more than a low energy approximation to a much deeper and quite different underlying field theory.


The concluding section of Wilczek's article is titled "Patterns of Explanation"; it's especially nice:
If there are to be simple explanations for complex phenomena, what form can they take?
One archetype is symmetry. In fundamental physics, especially in the twentieth century, symmetry has been the most powerful and fruitful guiding principle. By tying together the description of physical behavior in many different circumstances – at different places, at different times, viewed at different speeds and, of course, in different gauges! – it allows us to derive a wealth of consequences from our basic hypotheses. When combined with the principles of quantum theory, symmetry imposes very stringent consistency requirements, as we have discussed, leading to tight, predictive theories, of which Yang-Mills theory forms the archetype within the archetype.
(In the present formulation of physics quantum theory itself appears as a set of independent principles, which loosely define a conceptual framework. It is not absurd to hope that in the future these principles will be formulated more strictly, in a way that involves symmetry deeply.)
A different archetype, which pervades biology and cosmology, is the unfolding of a program. Nowadays we are all familiar with the idea that simple computer programs, unfolded deterministically according to primitive rules, can produce fantastically complicated patterns, such as the Mandelbrot set and other fractals; and with the idea that a surprisingly small library of DNA code directs biological development.
These archetypes are not mutually exclusive. Conway's Game of Life, for example, uses simple, symmetric, deterministic rules, always and everywhere the same; but it can, operating on simple input, produce extremely complex, yet highly structured output.
In fundamental physics to date, we have mostly got along without having to invoke partial unfolding of earlier, primary simplicity as a separate explanatory principle. In constructing a working model of the physical world, to be sure, we require specification of initial conditions for the fundamental equations. But we have succeeded in paring these initial conditions down to a few parameters describing small departures from space-time homogeneity and thermal equilibrium in the very early universe; and the roles of these two aspects of world-construction, equations and initial conditions, have remained pretty clearly separated. Whether symmetry will continue to expand its explanatory scope, giving rise to laws of such power that their solution is essentially unique, thus minimizing the role of initial conditions; or whether “fundamental” parameters (e.g., quark and lepton masses and mixing angles) in fact depend upon our position within an extended, inhomogeneous Multiverse, so that evolutionary and anthropic considerations will be unavoidable; or whether some deeper synthesis will somehow remove the separation, is a great question for the future.
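Conway's Game of Life really is tiny as a program. Here's a minimal Python sketch (my own illustration, not anything from Wilczek's article): one generation of the rules, plus the classic "glider" showing how simple, symmetric, deterministic rules produce structured, mobile patterns:

from collections import Counter

def step(live):
    # One generation of Life. `live` is the set of (x, y) cells alive now.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 live neighbours,
    # or exactly 2 and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True: same shape, shifted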

Wednesday, June 08, 2005

Asymptotic Freedom: From Paradox to Paradigm

by Frank Wilczek. Lecture on receipt of the 2004 Nobel Prize in Physics.
Preprint at the physics archive.


QCD Lava Lamp (spontaneous quantum fluctuations in the gluon fields)
These pictures make it clear and tangible that the quantum vacuum is a dynamic medium, whose properties and responses largely determine the behavior of matter. ... The masses of hadrons, then, are uniquely associated to tones emitted by the dynamic medium of space when it is disturbed in various ways ... We thereby discover, in the reality of masses, an algorithmic, precise Music of the Void. It is a modern embodiment of the ancients’ elusive, mystical “Music of the Spheres”.

Abstract
Asymptotic freedom was developed as a response to two paradoxes: the weirdness of quarks, and in particular their failure to radiate copiously when struck; and the coexistence of special relativity and quantum theory, despite the apparent singularity of quantum field theory. It resolved these paradoxes, and catalyzed the development of several modern paradigms: the hard reality of quarks and gluons, the origin of mass from energy, the simplicity of the early universe, and the power of symmetry as a guide to physical law.

A beautiful (and often very readable) account of one of the most recent (well, 1973) major advances in theoretical physics.
In theoretical physics, paradoxes are good. That’s paradoxical, since a paradox appears to be a contradiction, and contradictions imply serious error. But Nature cannot realize contradictions. When our physical theories lead to paradox we must find a way out. Paradoxes focus our attention, and we think harder.

"Paradox 1: Quarks are Born Free, but Everywhere They are in
Chains"
Powerful interactions ought to be associated with powerful radiation. When the most powerful interaction in nature, the strong interaction, did not obey this rule, it posed a sharp paradox.

"Paradox 2: Special Relativity and Quantum Mechanics Both Work"
The second paradox is more conceptual. Quantum mechanics and special relativity are two great theories of twentieth-century physics. Both are very successful. But these two theories are based on entirely different ideas, which are not easy to reconcile. In particular, special relativity puts space and time on the same footing, but quantum mechanics treats them very differently. This leads to a creative tension, whose resolution has led to three previous Nobel Prizes (and ours is another).

So we had the paradox, that combining quantum mechanics and special relativity seemed to lead inevitably to quantum field theory; but quantum field theory, despite substantial pragmatic success, self-destructed logically due to catastrophic screening.


A picture of particle tracks emerging from the collision of two gold ions at high energy. The resulting fireball and its subsequent expansion recreate, on a small scale and briefly, physical conditions that last occurred during the Big Bang.

Video, etc. of the Nobel Lecture.


See also An Emptier Emptiness, a previous post about another Wilczek article.

Exact Relativistic 'Antigravity' Propulsion

by F. S. Felber
The Schwarzschild solution is used to find the exact relativistic motion of a payload in the gravitational field of a mass moving with constant velocity. At radial approach or recession speeds faster than 3^(-1/2) times the speed of light, even a small mass gravitationally repels a payload. At relativistic speeds, a suitable mass can quickly propel a heavy payload from rest nearly to the speed of light with negligible stresses on the payload.


preprint in the physics archive.

I'm clueless but it sounds too cool.
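One bit of it is easy to check, at least arithmetically (this is just my own calculation of the threshold quoted in the abstract):

import math

c_km_s = 299792.458                 # speed of light
critical = c_km_s / math.sqrt(3.0)  # the 3^(-1/2) threshold from the abstract
print(f"{critical:.0f} km/s, about {1 / math.sqrt(3.0):.3f} c")  # ~173000 km/s

So the repulsive effect is claimed to kick in at about 58% of the speed of light.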

The author's address is:
Physics Division, Starmark, Inc., P. O. Box 270710, San Diego, California 92198
But I couldn't find much about "Starmark" on the web.

Monday, June 06, 2005

Quantum Theory Looks at Time Travel

by Daniel M. Greenberger, Karl Svozil preprint in the physics archive.
We introduce a quantum mechanical model of time travel which includes two figurative beam splitters in order to induce feedback to earlier times. This leads to a unique solution to the paradox where one could kill one's grandfather in that once the future has unfolded, it cannot change the past, and so the past becomes deterministic. On the other hand, looking forwards towards the future is completely probabilistic. This resolves the classical paradox in a philosophically satisfying manner.


By the way, Greenberger is the G in the famous GHZ paper on quantum theory - so he's definitely no amateur.

The paper uses the mathematical formalism of quantum mechanics, but it is very well-written, so it's easy to skip the equations and see what's going on.

As it turns out, the authors feel that time-travel paradoxes are resolved quantum-mechanically as follows: it might be possible to travel into the past, but you still couldn't change anything that has already unfolded.

This, of course, is the theme of numerous science fiction works. The first I personally encountered was the Twilight Zone episode "No Time Like the Past", where
Paul Driscoll uses a time machine to try and change three past events: the bombing of Hiroshima, Hitler's rise to power and the sinking of the Lusitania. He fails miserably at all of them, and decides to escape to the past. He picks Homeville, Indiana. From a history book he's brought along, he learns that a fire, started by runaway horses, will burn down a school and injure several children. He sees the wagon with the horses, and in trying to convince the owner to unhitch them, he frightens the horses and they start the fire. Driscoll returns to the present, content to leave the past alone.

from Ritterson's Episode Guides: The Twilight Zone

I remember feeling very frustrated watching this as a child!

In Behold the Man, Michael Moorcock's 1969 time travel story, the protagonist goes back to see what Jesus was really like. Jesus turns out to be an idiot and Mary is a harlot. The protagonist is bitterly disappointed, but his time machine is broken so he's stuck in the past. However, he ends up, really despite himself, saying and doing things that give rise to the legend of Jesus. "Behold the Man", by the way, is Pilate's sarcasm (John 19:5) as he has Jesus paraded before the crowd.


Motherhood takes another beating in the opening pages of An Alien Heat, the first of a fabulously decadent time travel series, also by Michael Moorcock.

There's a very different way of resolving time travel paradoxes, spectacularly unfolded in The Man Who Folded Himself by David Gerrold. If you go back in time to strangle your grandfather in his crib, instead of being thwarted (a la the Twilight Zone) or disappearing in a puff of smoke, the result is quite different. You're still there, and you have a dead baby in your hands! History now stands as follows: the past is exactly the same as it was until you materialized in your grandfather's nursery, but events now unfold "normally" with you in the room with the dead baby. This is somewhat reminiscent of the "Many Worlds" interpretation of quantum mechanics due to Hugh Everett. Time travel to the past causes history to fork into a new branch, but unfortunately you, the time traveler, have no access to the original branch. If you immediately time-traveled "back" to 2005, you'd find that you were never born and that all the other influences your grandfather had on his future never happened.

An earlier version of this same approach to time travel paradoxes occurs in Robert Heinlein's "The Door into Summer". As far as I can tell, this is actually a logically consistent way to avoid all paradoxes. The physics is a different matter.


I loved the time travel machine from George Pal's 1960 film of H.G. Wells's The Time Machine.

Saturday, June 04, 2005

The General in His Labyrinth by Gabriel Garcia Marquez


I was somewhat disappointed by this book, perhaps partly because I'd read several other books by Marquez, including the stupendous One Hundred Years of Solitude (more than once), so my expectations were very high. I borrowed it from a friend on a whim (Lidia actually, whose apartment is jammed with interesting books) when I realized there was a Marquez I hadn't actually read. It's a historical novel about the last weeks of Simon Bolivar, the Liberator of South America, who was a sick and greatly disappointed man at that point - hence it wasn't exactly upbeat. It reminded me of the sections of One Hundred Years which dealt with a fictional general - I found those parts rather plodding in an otherwise extremely compelling book. Even so, Marquez writes very, very good last sentences - the last sentence of One Hundred Years is the best last sentence in any book I've ever read. The last sentence of The General was also especially good; it actually gave me goose bumps, even though I didn't particularly care for the first however many thousand sentences which came before. The problem is, you really have to read the whole book before the last sentence has its proper impact - cheating and jumping right to the end probably won't work at all! By the way, my other most memorable last sentence was in Sentimental Education - please feel free to leave a comment with your favorite literary finale.


The Liberator - in better days than those depicted in the novel

Thursday, June 02, 2005

Beautiful Graphics from this week's Nature


The density distribution of matter in a slice of the computational volume of the Millennium Run model, showing large clusters with densities 1,000 times the mean density of the Universe (yellow); a 'cosmic web' of filamentary structures 10 to 100 times denser than the mean (purple); and the mostly empty regions (black), often called voids, which contain less than 10% of the mean density of the Universe. The white square shows the size of the computational volume for a full hydrodynamic simulation that would use up the same computational resources as the Millennium Run. (Figure courtesy of Volker Springel.)

Cosmology: Digitizing the Universe by Nickolay Y. Gnedin in Nature 435, 572-573 (2 June 2005).

For years, cosmologists have been racing each other to develop ever more sophisticated and realistic models of the evolution of the Universe. The competition has just become considerably stiffer.