This is an excerpt of an essay recently published in Aeon magazine as The Cold Fusion Horizon. Aeon's copyeditors did a pretty good job – except on the title! :-( – but they omitted quite a few of the links I'd included in my text. I've reproduced the relevant paragraphs here in their original form, to make my sources clear. There's a link at the end to the full piece in Aeon.
My Dinner with Andrea – Cold Fusion, Sane Science, and the Reputation Trap
Four years ago a physicist friend of mine made a joke on Facebook
about the laws of physics being broken in Italy. He had two pieces
of news in mind. One was a claim by the Gran Sasso-based OPERA
team to have discovered superluminal neutrinos. The other
concerned an engineer from Bologna called Andrea Rossi, who
claimed to have a cold fusion reactor producing commercially
useful amounts of heat.
Why were these claims so improbable? The neutrinos challenged a
fundamental principle of Einstein’s theory of special relativity,
that nothing can travel faster than light. Cold fusion, or
LENR (Low Energy Nuclear Reactions), as it is also called, is the
controversial idea popularised by Martin Fleischmann and Stanley
Pons in 1989: that nuclear reactions similar to those in the sun
could also occur at or close to room temperature, under certain
conditions. Fleischmann and Pons claimed to have found evidence
that such reactions could occur in palladium loaded with deuterium
(an isotope of hydrogen). A few other physicists, including Sergio
Focardi at Bologna, claimed similar effects with nickel and
ordinary hydrogen. But most were highly sceptical, and the field
“subsequently gained a reputation as pathological science,” as
Wikipedia puts
it. Even the believers had not claimed commercially useful
quantities of excess heat, as Rossi now reported from his "E-Cat"
reactor.
However, it turned out that my physicist friend and I disagreed
about which of these unlikely claims was the less improbable. He
thought the neutrinos, on the grounds that the work had been done
by respectable scientists, rather than by a lone engineer with a
somewhat chequered past. I thought Rossi, on grounds of the
physics. Superluminal neutrinos would overturn a fundamental tenet
of relativity, but all Rossi needed was a previously unnoticed
channel to a reservoir of energy whose existence is not in doubt.
We know that huge amounts of energy are locked up in metastable
nuclear configurations, trapped like water behind a dam. There’s
no known way to get useful access to that energy, at low
temperatures. But – so far as I knew – there was no "watertight" argument that no such method exists.
My friend agreed with me about the physics. (So has every other
physicist I’ve asked about it since.) But he still put more weight
on the sociological factors – reputation, as it were. So we agreed
to bet a dinner on the issue. My friend would pay if Rossi turned
out to have something genuine, and I would pay if the neutrinos
came up trumps. We’d split the bill if, as then seemed highly
likely, both claims turned out to be false.
It soon became clear that I wasn’t going to lose. The neutrinos
were scratched from the race when it turned
out that someone on OPERA’s team of respectable scientists
had failed to tighten an optical lead correctly.
Rossi, however, has been going from strength to strength. While
it is fair to say that the jury is still out, there has been a lot
of good news (for my hopes of a free dinner) in the past couple of
years. There have been two reports (in 2013 and 2014)
of tests of Rossi’s device by teams of Swedish and Italian
physicists whose scientific credentials are not in doubt, and who
had access to one of his devices for extended periods (a month,
for the second test). Both reports claimed levels of excess heat
far beyond anything explicable in chemical terms, in the testers’
view. (The second report also claimed isotopic shifts in the
composition of the fuel.) Since then there have been several
reports of duplications by experimenters in Russia
and China,
guided by details in the 2014 report.
More recently, Rossi was granted
a US patent for one of his devices, previously refused on
the grounds that insufficient evidence had been provided that the
technique worked as claimed. There are credible
reports that a 1MW version of his device, producing many
times the energy that it consumes, has been on trial in an
industrial plant in Florida for months, with good results so far.
And Rossi’s US backer and licensee, Tom Darden – a
respectable North Carolina-based industrialist, with a long track
record of investment in pollution-reducing industries – has been
increasingly willing to speak
out in support of the LENR technology field. (Another
investor, UK-based Woodford Funds, reports
that it conducted "a rigorous due diligence process that has taken
two and half years.")
Finally, very recently, there’s a paper
by two senior Swedish physicists, Rickard Lundin and Hans Lidgren,
proposing a mechanism for Rossi’s results, inspired in part by the
second of the two test reports mentioned above. Lundin and Lidgren say
that the "experimental results by Rossi and co-workers and their
E-Cat reactor provide the best experimental verification of the …
process" they propose.
As I say, I don’t claim that this evidence is conclusive, even
collectively. It’s still conceivable that there is fraud involved,
as many sceptics have claimed; or some large and persistent
measurement error. Yet as David Bailey and Jonathan Borwein point
out here
and here,
these alternatives are becoming increasingly unlikely – which is
great news for my dinner prospects! (Bailey and Borwein have also
interviewed Rossi, here.)
Moreover, Rossi is not the only person claiming commercially
relevant results from LENR. Another prominent example is Robert
Godes, of Brillouin Energy,
profiled in this
recent Norwegian newspaper piece. If you want to dismiss
Rossi on the grounds that he’s claiming something impossible, one
of these explanations needs to work for Godes, too.
You can see why I’ve been salivating at the thought of My Dinner
With Andrea, as I’ve been calling it (h/t Louis
Malle), in honour of the man who will be the absent guest of
honour, if my physicist friend is paying. And it is not only my
stomach that has become increasingly engaged with this
fascinating story. I’m a philosopher of science, and my brain has
been finding it engrossing, too.
What do I mean? Well, it hasn’t escaped my attention that there’s
a lot more than a free dinner at stake. Imagine that someone had a
working hot fusion reactor in Florida – assembled, as Rossi’s 1MW
device is reported to be, in a couple of shipping containers, and
producing several hundred kilowatts of excess power, month after
month, in apparent safety. That would be huge news, obviously. (As
several people have noticed, a new clean source of energy would be
really,
really useful, right about now!)
But if the potential news is this big, why haven’t most of you
heard about Rossi, or Godes, or any of the other people who have
been working in the area (for many years, in some cases)? This is
where things get interesting, from a philosopher of science’s
point of view.
As a question about sociology, the answer is obvious. Cold fusion
is dismissed as pseudoscience, the kind of thing that respectable
scientists and science journalists simply don’t talk about (unless
to remind us of its disgrace). As a recent Fortune piece
puts it, the Fleischmann and Pons "experiment was eventually
debunked and since then the term cold fusion has become almost
synonymous with scientific chicanery." In this case, the author of
the article is blithely reproducing the orthodox view, even in the
lead-in to his interview with Tom Darden – who tells him a
completely different story (and has certainly put his money where
his mouth is).
Ever since 1989, in fact, the whole subject has been largely
off-limits, in mainstream scientific circles and the scientific
media. Authors who do put their head above the parapet are ignored
or rebuked. Most recently, Lundin and Lidgren report
that they submitted their paper to the journal Plasma Physics
and Controlled Fusion, but that the editors declined to
have it reviewed; and that even the non-reviewed preprint archive,
arxiv.org, refused to accept it.
So, as a matter of sociology, it is easy to see why Rossi gets
little serious attention; why an interview with Tom Darden
associates him with scientific chicanery; and why, I hope, some of
you are having doubts about me, for writing about the subject in a
way that indicates that I am prepared to consider it seriously.
(If so, hold that attitude. I want to explain why I take it to
reflect a pathology in our present version of the scientific
method. My task will be easier if you are still suffering from the
symptoms.)
Sociology is one thing, but rational explanation another. It is
very hard to extract from this history any satisfactory justification
for ignoring recent work on LENR. After all, the standard line is
that the rejection of cold fusion in 1989 turned on the failure to
replicate the claims of Fleischmann and Pons. Yet if that were the
real reason, then the rejection would have to be provisional.
Failure to replicate couldn’t possibly be more than provisional –
empirical science is a fallible business, as any good scientist
would acknowledge. In that case, well-done results claiming to
overturn the failure to replicate would certainly be of great
interest.
Perhaps the failure to replicate wasn’t crucial after all?
Perhaps we knew on theoretical grounds alone that cold fusion was
impossible? But this would make nonsense of the fuss made at the
time and since, about the failure to reproduce the Fleischmann and
Pons results. And in any case, it is simply not true. As I said at
the beginning, what physicists actually say (in my experience) is
that although LENR is highly unlikely, we cannot say that it is
impossible. We know that the energy is in there, after all.
No doubt one could find some physicists who would claim it was
impossible. But they might like to recall the case of Lord
Rutherford, greatest nuclear physicist of his day, who famously
claimed that "anyone who expects a source of power from
transformation of … atoms is talking moonshine" – the very day
before Leo Szilard, prompted by newspaper reports of Rutherford’s
remarks, figured
out the principles of the chain reaction that makes nuclear
fission useable as an energy source, peaceful or otherwise.
This is not to deny that there is truth in the principle
popularised by Carl Sagan, that extraordinary claims require
extraordinary evidence. We should certainly be very cautious about
such surprising claims, unless and until we amass a great deal of
evidence. But this is not a good reason for ignoring such evidence
in the first place, or refusing to contemplate the possibility
that it might exist. (As Robert Godes said
recently: "It is sad that such people say that science
should be driven by data and results, but at the same time refuse
to look at the actual results.")
Again, there’s a sociological explanation why few people are
willing to look at the evidence. They put their reputations at
risk by doing so. Cold fusion is tainted, and the taint is
contagious – anyone seen to take it seriously risks contamination
themselves. So the subject is stuck in a place that is largely
inaccessible to reason – a reputation trap, we might
call it. People outside the trap won’t go near it, for fear of
falling in. "If there is something scientists fear it is to become
like pariahs," as Rickard Lundin puts
it. People inside the trap are already regarded as
disreputable, an attitude that trumps any efforts they might make
to argue their way out, by reason and evidence.
...
Read the rest of this essay at Aeon.
Wednesday 25 February 2015
Peter Menzies (1953–2015)
[Preprint of an obituary to appear in the Australasian Journal of Philosophy, June 2015]
Peter Charles Menzies died at home in Sydney on 6 February 2015, the day after his sixty-second birthday, at the sad conclusion of a seven-year disagreement with cancer. No one who knew him will be surprised to learn that he conducted this long last engagement with the same strength of mind, clarity, and good-natured equanimity for which he was known and loved by friends, students and colleagues, over the three decades of his professional life. He continued working throughout his illness, teaching and supervising at Macquarie University until his retirement in 2013, and writing and collaborating until his final weeks. He will be remembered by the Australasian philosophical community as one of its most lucid and generous voices, and by philosophers worldwide as one of the most astute metaphysicians of his generation.
Menzies was born in Brisbane, and spent his childhood there
and in Adelaide. His family moved to Canberra in 1966, where he attended
Canberra Grammar School. He studied Philosophy at ANU, graduating with the
University Medal in 1975. He went on to an MPhil at St Andrews, writing on
Michael Dummett's views on Realism under the supervision of Stephen Read; and
then to a PhD at Stanford, where he worked with Nancy Cartwright on Newcomb
Problems and Causal Decision Theory. His Stanford experience was evidently
formative, not merely in setting the course of much of his future work, but in
establishing a fund of anecdotes that would long enrich the Coombs tearoom and
other Australian philosophy venues. There is a generation of Australian-trained
metaphysicians who know little about Michel Foucault, except that he had the
good fortune to be taken out for pizza in Palo Alto by a young Peter Menzies,
following a talk at Stanford. (Peter would add how delighted he was to discover
that Foucault preferred pizza to something expensive and French.)
Returning to Australia in 1983, Menzies held a Tutorship at
the Department of Traditional & Modern Philosophy, University of Sydney,
from 1984 to 1986. He was then awarded an ARC Research Fellowship, held
initially at the University of Sydney and then at ANU, where he won a Research
Fellowship in the Philosophy Program, RSSS. He remained at ANU until 1995, when
he took up a Lectureship at Macquarie University. He was promoted to a Personal
Chair at Macquarie in 2005, becoming an Emeritus Professor following his
retirement in 2013. He was elected a Fellow of the Australian Academy of
Humanities in 2007, and was President of the Australasian Association of
Philosophy in 2008–2009.
Peter Menzies with Arnie Koslow, Cambridge 1992 – Photograph by Hugh Mellor.
The central focus of Menzies’ philosophical work, throughout much of his career, was the study of causation – both causation in itself, and causation in its relevance to other philosophical topics, such as physicalism, levels of explanation, and free will. From the beginning, he had a particular knack for putting his finger on difficulties in other philosophers’ positions, and for explaining with great clarity what the problem was. With this combination of talents, he was soon making a difference. At the beginning of David Lewis’s famous paper ‘Humean Supervenience Debugged’ (Mind, 1994), Lewis singles out "especially the problem presented in Menzies (1989)" as the source of, as he puts it, "the unfinished business with causation". The reference is to Menzies’ ‘Probabilistic Causation and Causal Processes: A Critique of Lewis’ (Philosophy of Science, 1989), and other early papers had a similar impact.
Most
would agree that the business with causation remains unfinished, twenty years
later, but that the field is greatly indebted to Menzies for much of the
progress that has been made in the past three decades. As a philosopher who
argued that we should understand causation in terms of the notion of making a
difference, he certainly practised what he preached, within his own arena.
Fair-minded
to a fault, Menzies was just as adept at putting his finger on what he saw as
failings in his own work, and often returned with new insights to previously
worked ground. His much-cited piece 'Probabilistic Causation and the
Pre-emption Problem' (Mind, 1996) is such an example. Later
classics include his ‘Difference-Making in Context’ (in Collins et al., eds, Counterfactuals
and Causation, MIT Press, 2004), and ‘Non-Reductive Physicalism and the Limits of the
Exclusion Problem’ (JPhil, 2009), a piece co-authored with Christian List.
List
is Menzies’ most recent collaborator and co-author, but several other
philosophers, including myself, had earlier had this good fortune. In my case
it happened twice, the first and better-known result being our paper ‘Causation
as a Secondary Quality’ (BJPS, 1993), a piece actually written in the late 1980s,
and first delivered in the Philosophy Room at the University of Sydney at the 1990
AAP Conference. (I can’t recall how we divided up the delivery, but we
certainly fielded questions jointly, and I remember complaining to Peter
afterwards that he’d missed an obvious Dorothy-Dixer from a young David
Braddon-Mitchell.) Whatever its qualities, or lack of them, the paper proved a
stayer, and is for each of us our most-cited article, by a very wide margin.
As one of Menzies’ collaborators, I find it easy to understand why he was such a
successful teacher and supervisor, held in such grateful regard by generations
of students. He combined patience, equanimity, generosity, and unfailing
good-humour, with insight, exceptional clarity, and an almost encyclopaedic
acquaintance with relevant parts of the literature. In effect, he made it
impossible for his grateful students – and collaborators! – not to learn, and to enjoy
the process. Many of his PhD students from ANU and Macquarie, such as Mark
Colyvan, Daniel Nolan, Stuart Brock, Cathy Legg, Mark Walker, Joe Mintoff, Nick
Agar, Kai Yee Wong, and Lise Marie Andersen, have now gone on to distinguished
careers in Australasia and elsewhere. All remember him with fondness and
gratitude. As Lise Marie Andersen, one of his last PhD students, puts it: “As a
supervisor Peter was patient, warm and extremely generous with his time and
knowledge. As a philosopher he was an inspiration.”
Menzies is survived by his daughter Alice and son Edward
(Woody) from his former marriage to Edwina Menzies, and by Alice’s three sons,
Joseph, Nicolas and Eli; by his partner Catriona Mackenzie, step-sons Matt and
Stefan, and a step-granddaughter, Olivia, born a few weeks before his death;
and by his brother Andrew and sister Susan. By his friends, students, and
colleagues, as by his family, he will be very sadly missed.
Friday 30 August 2013
Rebirthing Pains
[From Science, 30 August 2013; full details and link to published version]
Lee Smolin likes big targets. His last book, The Trouble With Physics, took on the string theorists who dominate so much of contemporary theoretical physics. It was my engrossing in-flight reading on a trip to the Perimeter Institute a few years ago, where I first met its rather engaging author in person. I thoroughly enjoyed that battle, from my distant philosophical vantage point – “Pleasant is it to behold great encounters of warfare arrayed over the plains, with no part of yours in the peril,” as Lucretius put it (1). But now things are more serious: in Time Reborn Smolin has my team in his sights, and some part of mine is certainly in the peril, if he emerges victorious. Should I now be feeling sorry for the string theorists?
I’ll come back to that question, but first to the dispute itself, which
is one of philosophy’s oldest feuds. One team thinks of time as we seem to
experience it, a locus of flow and change, centered on the present moment –
“All is flux”, as Heraclitus put it, around 500 BC. The other team, my clan, are
loyal instead to Heraclitus’s near contemporary, Parmenides of Elea. We think
of time as it is described in history: simply a series or “block” of events,
lined up in a particular order, with no distinguished present moment. For us,
“now” is like “here” – it marks where we ourselves happen to stand, but has no
significance at all, from the universe’s point of view.
Which side is right? Both teams have supporters in contemporary philosophy, but we Parmenideans claim powerful allies in modern physics, commonly held by physicists themselves to favour the block picture. Einstein is
often quoted as one of our champions. In a letter to the bereaved family of his
friend Michele Besso, Einstein offered the consoling thought that past, present
and future are all equally real; only from our human perspective does the past
seem lost: “We physicists know that the distinction between past, present and
future is only an illusion, albeit a persistent one,” he wrote. (For this, Karl
Popper called him “the Parmenides of modern physics”. Smolin, too, quotes this
letter, though he also claims evidence for a more Heraclitan Einstein, in some
remarks reported by the philosopher Rudolf Carnap.)
On the other side, some Heraclitans are so sure of their ground that
they insist that if Einstein is right – if the distinction between past,
present and future is not objective – then time itself is an illusion.
Accordingly, they interpret the block view as the claim that time is unreal. In
a celebrated paper from 1951, the Harvard philosopher D. C. Williams (following
Wyndham Lewis) called these folk the “time snobs”: “They plume themselves that
. . . they alone are ‘taking time seriously’,” as Williams puts it (2).
To Parmenideans such as Williams and myself, this attitude is just linguistic imperialism, cheeky and rather uncharitable. Of course we believe that
time is real, we insist. (It is as real as space is, the two being simply
different aspects of the same four-dimensional manifold.) What we deny is just
that time comes carved up into past, present and future, from Nature’s point of
view. On the contrary, we say, “now” is just like “here”: it all depends where
you stand.
I’ve mentioned this because Smolin is a classic time snob, in Williams’
terms. When he says that he is defending the unpopular view that time is real,
he means time in the time snobs’ proprietary sense. This makes him sound like a
defender of common sense – “Of course time is real, what could all those clever
folk have been thinking?”, the reader is being invited to think – whereas in
fact the boot is on the other foot. It is Smolin’s view that trips over common
and scientific sense, denying the reality of what both kinds of sense take for
granted.
To explain why, and to bring these issues down to earth, suppose that I
ask you to tell me what you did last week. You give me some of the details,
telling me a little about what happened to you, what you did, on each of those
seven days. I can press you for more details – the conversation might go on for
a long time! – but can I complain that you haven’t told me which day (or
minute, or second) last week was the present moment? Obviously not: each moment
was present when it happened, but they all have this feature in common – no
single moment is distinguished in any way from all the others. Have I denied
that there was time last week? Again, obviously not. Like any other week, after
all, it contained 168 hours of the stuff! And the week wasn’t static – events
happened, things changed.
So we can make perfectly good sense of time without picking out one
moment as the present moment. We do it all the time, when we think about other
times. And there’s nothing more radical in the block universe view than the
idea that science should describe reality in the way that we just imagined
describing last week, leaving out altogether the idea of a present moment, and
hence a division between past and future. For the universe, as for last week,
this is perfectly compatible with thinking that time is real, that events
happen, and all the rest of it.
Again, D. C. Williams was beautifully clear about this, in his challenge
to the time snobs. Here he is, pointing out that nothing that matters is
missing from the block view:
Let us hug to us as closely as we like that there is real succession, that rivers flow and winds blow, that things burn and burst, that men strive and guess and die. All this is the concrete stuff of the manifold, the reality of serial happening, one event after another, in exactly the time spread which we have been at pains to diagram. What does the theory allege except what we find, and what do we find that is not accepted and asserted by the theory?
Thus the block picture is simply a view of time from no particular time, just as a map depicts a spatial region from no particular place. (In both cases, we can add a red dot to mark our own position, but the map is not incomplete without one.) It no more makes time unreal than maps make space unreal.
To deny this, to squeeze the obvious fact that we can talk about other
times (like last week) into their peculiar present-centered cosmology, Heraclitans need to resort to desperate measures. They need to insist that when we
seem to be talking about last week, or Einstein, or the history of the universe
since the Big Bang, we are really talking about something in the present moment
– our present evidence, perhaps.
Some brave Heraclitans are prepared to go this far, and Smolin may be
inclined to follow them. As he puts it at one point, his view is that “the past
was real but is no longer real. We can, however, interpret and analyze the
past, because we find evidence of past processes in the present.” But this is a
little slippery. What are we interpreting and analysing, exactly, and what is all this evidence supposed to be evidence about? Like other Heraclitans, Smolin
faces a dilemma. If history – the study of the past – can be taken at face
value, as attempting to describe something other than the present moment, then
it’s a model for the block view of the universe, in the way I have indicated
(and shows how misleading it is to say that such a view is timeless). If not,
then the proposal is much more radically revisionary, much more out of line
with common sense, than Smolin seems to appreciate.
True, Smolin would not be alone in going to this extreme. At the same
meeting at the Perimeter Institute where I first met Smolin himself, I heard a
distinguished quantum theorist insist that the past is just a model we invent
to make sense of present evidence, and not to be taken literally (3). As I
remarked at the time, this kind of attitude soon gets out of hand – I said that
I felt myself tempted to conclude that the distinguished speaker, too, along
with the rest of the audience, were just aspects of a model I had constructed
to make sense of my own evidence (no more to be taken literally than talk of
what we all did yesterday). The time snobs’ chauvinism of the present moment
slides easily into solipsism.
So I have grave doubts whether Smolin’s version of the Heraclitan view
is any more plausible than its predecessors. And yet it has a tragic quality
that other versions of the view mostly lack, for Smolin regards it as the
gateway to promising new physics. I haven’t said anything about these interesting
ideas, which I’m not so well qualified to assess, but the tragedy lies in the
fact that they seem entirely compatible with the block universe view, properly understood. The main idea is that the laws of physics can change from
one time to another, and – leaving aside any special issues about the
mutability of laws – we saw that the block universe can accommodate change as
well as its rival. “Rivers flow, winds blow, . . . men strive and guess and
die,” as Williams puts it. Laws change, too, perhaps, if Smolin is right, but
this too would then be part of “the concrete stuff of the manifold, the reality
of serial happening, one event after another.” If so, then Smolin’s campaign
against the Parmenideans is entirely unnecessary, a great waste of energy and
brain hours!
I think Smolin would reply that it is essential to his proposal that the
way in which laws evolve not be deterministic, that there be genuine novelty,
and that the block view doesn’t allow this. But although many people assume
that the block picture is necessarily deterministic, this is simply a mistake
(“albeit a persistent one”, as Einstein might have put it). There is no more
reason why the block view should require that the contents of one time be
predictable from a knowledge of another, than there is for it to be a constraint on a
map of the world that the topography at the Tropics be predictable from that at
the Equator. I can think that there’s a fact about what I had for lunch last
Tuesday – or will have for lunch next Tuesday – without thinking that there’s
any decisive present evidence about these matters, such that Laplace’s demon
could figure it out, based on what he could know now.
So Smolin is fighting an enemy with whom he has no real reason to disagree, in my view. He is opposing a conception of time that if not actually on
his side, is certainly not in his way. And in waging this wholly unnecessary
battle he is aligning himself with a team whose own weaknesses – and
misunderstandings of their rival – have been apparent for many decades. If I’m
right, then he’s shooting himself in the foot, big time.
Coming back then to the question at the beginning, how do I feel? Do my parts feel in peril in Time Reborn, and should I now be feeling sorry for the string theorists? I wish I could say “yes”, but my main sense isn’t peril, but sadness: sadness for the reason just mentioned, the tragic unnecessity of Smolin’s campaign, but also sadness (and some embarrassment) that my home side – I mean Team Philosophy, now – has not succeeded in making these things clear. (“Must do better”, as a distinguished Oxford colleague put it recently (4).)
References
1. Lucretius, De Rerum Natura, W. H. D. Rouse, Trans. (Harvard Univ. Press, Cambridge, MA, 1975).
2. D. C. Williams. 'The myth of passage'. J. Phil. 48 457–472 (July 1951).
3. N. David Mermin. 'Confusing Ontic and Epistemic Causes Trouble in Classical Physics Too'. PIRSA:09090077 (Perimeter Institute, Waterloo, 2009).
4. T. Williamson. 'Must do better'. In Truth and Realism, P. Greenough, M. P. Lynch, Eds. (Oxford Univ. Press, Oxford, 2006), pp. 177–187.
Note
This is the author’s version of this work. It is posted here by permission of the AAAS for personal use, not for redistribution. The definitive version was published in Science, Vol. 341 no. 6149 (30 August 2013), pp. 960-961, DOI: 10.1126/science.1239717, and is accessible via these links: full text; reprint.
For some earlier thoughts on these topics, see this piece, written with my former Sydney colleague, Jenann Ismael.
Friday 23 August 2013
Cambridge, Cabs and Copenhagen
This piece on the NYT Opinionator site explains how I came to be working with Jaan Tallinn and Martin Rees, to establish the Centre for the Study of Existential Risk (CSER.ORG).
Royal consent?
What do Britain, Sweden, Norway, Denmark, the Netherlands, Belgium, Spain and Luxembourg have in common? All of them select their future heads of state at birth, thus denying to a few individuals a simple freedom (to choose one's own life) that they and comparable countries have long taken for granted for everyone else. This injustice hides in plain sight – equally invisible, apparently, both to opponents and supporters of these hereditary monarchies.
I've written about this issue in two recent pieces (here and here) in The Conversation, and also in the piece reproduced below, from the Cambridge Faculty of Philosophy's 2013 Newsletter.
‘Erroneously supposed to do no harm’
Bertrand Russell’s celebrated lecture ‘On the Notion of Cause’ was first delivered on 4 November 1912, as Russell’s Presidential Address to the Aristotelian Society. It gave Russell a place beside Hume as one of the great causal sceptics, and twentieth-century philosophy one of its most famous lines: “The law of causality”, Russell declares, “like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.”
On 1 November 2012, taking advantage of a happy accident of timing, I used my Inaugural Lecture as Bertrand Russell Professor to mark the centenary of ‘On the Notion of Cause’, and to ask what its conclusions look like with the benefits of a century’s hindsight. As I explained, the story has many Cambridge connections. Indeed, much of what Russell set out to achieve was given proper if sadly sketchy foundations in one of Frank Ramsey’s late papers from 1929, just four months before his untimely death. (It has taken the rest of us most of a century to catch up.)
Preparing my lecture, I wondered what Russell had had in mind in the other part of his famous line. Just what, in his view, was the harm that the monarchy is erroneously thought not to do? I assumed this would be an easy curiosity to satisfy – somewhere, the prolific Russell would have written about the monarchy at greater length. But I searched in vain.
Eventually I wrote to Nicholas Griffin, of the Russell Archives at McMaster. He told me that there was nothing to find, not even in Russell’s correspondence, so far as he knew it. But he suggested a context for Russell’s remark. Britain had recently concluded a constitutional crisis, brought on by the Liberal government’s determination to remove the veto power of the House of Lords. A crucial step was the King’s indication that he would support the government, if necessary, by creating sufficient new Liberal peers to ensure passage of the Bill through the Lords. (Russell would have been one of those new peers, in that counterfactual world.) Professor Griffin suggested that in the light of the King’s support, some on the Liberal side were saying that the monarchy wasn’t so bad after all; and that Russell may have been taking the opportunity to indicate that he was made of sterner stuff – that the old battle lines of the Russells remained unchanged.
But this doesn’t tell us what Russell thought the harm in question actually was, at that point in the nation’s history – when, thanks in part to Russell’s own ancestors, it had long been a “crowned republic”, as Tennyson put it (a fact reaffirmed in the recent crisis). So, as my centenary footnote to Russell’s great paper, I offered my own proposal. In my view, there is a significant harm associated with modern constitutional monarchies (of which there are nine or ten in all, most of them in Western Europe) – a consequence remarkable for the fact that although in plain sight, it goes unmentioned, and apparently almost unnoticed. It is indeed ‘a relic of a bygone age’, as Russell puts it, whose cost is hidden from us by the sheer familiarity of the system of which it is a consequence – by the fact that a traditional picture holds us in its grip, as Wittgenstein might have put it. Moreover, while I don’t suggest that this is what Russell actually had in mind, it is something that he in particular would have had reason to have in mind – it resonates in several ways with aspects of his own life. In all senses, then, it is an excellent fit.
The point in question is so simple that it is apt to seem banal. In selecting children on an hereditary basis for public office, we deny them a freedom we take for granted for our own children, to decide for themselves what they want to make of their lives. To see the issue in perspective, imagine such a system being proposed in some contemporary democracy, starting from scratch. In future, various public offices would be filled by selecting infants who would be brought up to fill the roles in question. (A knock at the door might signal that your child had been chosen to be a future Archbishop of Canterbury, say.) The main objection would not be that it was undemocratic, but that it was absurdly unfair to the individuals concerned. The fact that we do find this system acceptable in practice, for one particular public office, turns mainly on its sheer familiarity – that’s just how things are done. Perhaps also, as Russell thinks in the case of causation, we are in the grip of bad metaphysics: we think of royalty as a natural kind, and hence imagine that it is a natural matter that royal children should be brought up to play these roles – that’s the kind of beings they are, as it were. The picture holds us captive, and central to it is the fantasy that what these families enjoy is a matter of entitlement and privilege, not constraint and obligation.
It is easy to see how we got to this point, from the distant past this picture actually depicts: on the one hand, a great erosion of opportunity on the side of royalty, as – thanks in part to Russell’s ancestors, in the British case – its powers were curtailed; on the other, an even greater expansion of opportunity on the side of ordinary people, especially children, as we came to accept that young people should make their life choices for themselves. The combination means that the heirs to modern democratic monarchies are now marooned on little islands of under-privilege, impoverished not only compared to their own ancestors, but also, more importantly, compared to the standards that now exist in the community at large.
This may seem an exaggeration. Couldn’t an heir simply abdicate, if she didn’t want to rule? Well, yes, but certainly not simply! It would be a difficult, public and personally costly process. She would be disappointing a nation’s expectations, impressed on her throughout a childhood in which she had been taught that this is her duty, her place in life. (There’s the small matter of putting a sibling in the hot seat, too.) Why should her freedom require her to scale such a formidable fence, when our children come and go as they please?
This was my proposal concerning the monarchy’s hidden harm, and it is easy to see why I took it to be Russellian in spirit. Russell felt the constraints of his own childhood very deeply, and was greatly relieved to escape them when he came of age. Later, when he himself became a father, he was an advocate of allowing children as much freedom as possible. Famously, too, he was an opponent of conscription. He also had a talent for calling our attention to those uncomfortable truths that hide themselves in plain sight. I think he would have felt it entirely appropriate to call attention to this one.
Thursday 15 August 2013
Dispelling the Quantum Spooks – a Clue that Einstein Missed?
[This is a joint piece with Ken Wharton. An abstract and a PDF version are available here. This version is my first experiment with Blogger – apologies for a few glitches in the HTML.]
[1] Einstein, A., Letter to Max Born, 3 March 1947, published in Max Born, ed., The Born-Einstein Letters: Friendship, Politics and Physics in Uncertain Times (Macmillan, 1971), p. 178.
1 Einstein’s dream
Late in his life, Einstein told Max Born that he couldn’t take quantum
mechanics seriously, “because it cannot be reconciled with the idea that
physics should represent a reality in time and space, free from spooky
actions at a distance.”[1] Most physicists think of this as the sad lament of
an old giant, beached on the wrong side of history – unable to accept
the revolution in physics that he himself had done so much to foment,
decades before, with his 1905 discovery that light is absorbed in discrete
“quanta”.
According to this orthodox view, John Bell put the final nail in the coffin of
the dream of a spook-free successor to quantum theory, a decade after Einstein’s
death. Bell’s Theorem seemed to show that any theory of the kind Einstein
hoped for – an addition to quantum theory, describing an underlying “reality in
space and time” – is bound to involve the kind of “action at a distance”, or
“nonlocality”, to which Einstein was objecting.
So Einstein’s spooks have long seemed well-entrenched in the quantum world
– and this despite the fact that they still seem to threaten Einstein’s other great
discovery from 1905, special relativity. As David Albert and Rivka Galchen put it
in a recent piece in Scientific American, writing about the intuition of “locality”:
“Quantum mechanics has upended many an intuition, but none deeper than
this one. And this particular upending carries with it a threat, as yet
unresolved, to special relativity—a foundation stone of our 21st-century
physics.”[2]
But could this accepted wisdom be due for a shake-up? Could
Einstein have the last laugh after all? Intriguingly, it turns out there’s a
new reason for taking seriously a little-explored loophole in Bell’s
Theorem.
Even more intriguingly, it’s a reason that Einstein himself could have spotted as
early as 1905, since it is a simple consequence of the quantization of light,
together with another assumption that he certainly accepted at that
time.
The loophole stems from the fact that Bell’s argument assumes that our
measurement choices cannot influence the past behaviour of the systems we
choose to measure. This may seem quite uncontroversial. In the familiar world of
our experience, after all, causation doesn’t work “backwards”. But a few
physicists have challenged this assumption, proposing theories in which causation
can run backwards in the quantum world. This idea – “retrocausality”,
as it is often called – has been enjoying a small renaissance. (See Box
1.)
Until very recently,[3] however, no one seems to have noticed that there is a
simple argument that could have put retrocausality at centre-stage well
before the development of quantum theory. As we explain below, the
argument shows that retrocausality follows directly from the quantization of
light, so long as fundamental physics is time-symmetric (meaning that
any physical process allowed in one time direction is equally allowed in
reverse).
Many ordinary physical processes, such as cream mixing in coffee, don’t
appear to be time-symmetric. (The cream mixes in, but never mixes out!) But
these are processes involving large numbers of microscopic constituents, and
the constituents seem to behave with complete time-symmetry. Cream
“unmixing” out of coffee is physically possible, just very improbable.
Einstein himself defended the view that “irreversibility depends exclusively
upon reasons of probability”,[4] and is not evidence of a fundamental
time-asymmetry.
Box 1: The retrocausal mini-renaissance.
Recent interest in retrocausality in QM comes from several directions:
- Supporters of Yakir Aharonov’s Time Symmetric Quantum Mechanics (see popular account here: bit.ly/tsqm2010) have claimed that because this theory is retrocausal, it provides a superior account of the results of so-called “weak measurements”.
- Some authors have suggested that there is evidence of retrocausality in recent versions (e.g. [7]) of John Wheeler’s Delayed Choice Experiment.
- The recent Pusey-Barrett-Rudolph (PBR) Theorem [8] puts tough new restrictions on interpretations of QM that regard the wave function, as Einstein preferred, merely as a representation of our knowledge of a physical system. Commentators have noted that retrocausality may provide the most appealing escape hatch – see Matt Leifer’s summary: tinyurl.com/LeiferPBR.
- Recent work by Ruth Kastner [tinyurl.com/SciAmTI] has revived interest in Cramer’s Transactional Interpretation, an early retrocausal proposal inspired by work by Wheeler and Feynman.
As we shall explain, the new argument shows that quantization makes a
crucial difference. Time-symmetry alone doesn’t guarantee that causation ever
works backwards, but quantization gives us a new kind of influence, which –
assuming time-symmetry – must work in both temporal directions. This new kind
of influence is so subtle that it can evade spooky nonlocality, without giving us
an even more spooky ability to send signals into the past. One of the striking
things about the apparent action at a distance in quantum mechanics (QM) is
that it, too, is subtle in just this way: there’s no way to use it to build a “Bell
Telephone”, allowing superluminal communications. The argument hints how this
subtlety might arise, as a consequence of quantization, from an underlying
reality that smoothly links everything together, via pathways permitted by
relativity.
What would Einstein have thought, had he noticed this argument? We can
only speculate, but one thing is clear. If physics had already noticed that
quantization implies retrocausality (given fundamental time-symmetry,
which was widely accepted), then it wouldn’t have been possible for later
physicists simply to ignore this option, as often happens today. It is true
that the later development of quantum theory provides a way to avoid
retrocausality, without violating time-symmetry. But as we’ll see, this requires
the very bit of quantum theory that Einstein disliked the most – the
strange, nonlocal wave function, that Einstein told Born he couldn’t take
seriously.
By Einstein’s lights, then, the new argument seems important. If it had been
on the table in the early years of quantum theory, the famous debates in the
1920s and 1930s about the meaning of the new theory couldn’t possibly have
ignored the hypothesis that a quantum world of the kind Einstein hoped for
would need to be retrocausal.
Similarly, the significance of Bell’s work in the 1960s would
have seemed quite different. Bell’s work shows that if there’s no
retrocausality, then QM is nonlocal, in apparent tension with special
relativity.
Bell knew of the retrocausal loophole, of course, but was disinclined to explore it.
(He once said that when he tried to think about backward causation he “lapsed
quickly into fatalism”.[6]) But if retrocausality had been on the table for half
a century, it would presumably have been natural for Bell and others
to take this option more seriously. Indeed, Bell’s Theorem itself might
then have been interpreted as a second strong clue that the quantum
world is retrocausal – the first clue (the one that Einstein missed) being
the argument that the quantization of light implies retrocausality, given
time-symmetry.
In this counterfactual history, physicists in the 1960s – two generations ago –
would already have known a remarkable fact about light quantization
and special relativity, Einstein’s two greatest discoveries from 1905.
They would have known that both seem to point in the direction of
retrocausality.
What they would have made of this remarkable fact, we cannot yet say, but we
think it’s time we tried to find out.
2 Polarization, From Classical to Quantum
The new argument emerges most simply from thinking about a property of light
called “polarization”. (It can also be formulated using other kinds of properties,
such as spin, but the polarization case is the one that Einstein could have used in
1905.) So let’s start with the basics of polarization.
Imagine a wave travelling down a string. Looking down the string’s axis you
would see the string moving in some perpendicular direction: horizontal,
vertical, or at some other angle. This angle is strongly analogous to the
“polarization angle” of an electromagnetic wave in Classical Electromagnetism
(CEM). For a linearly polarized CEM wave, the electric field is oscillating
at some polarization angle, perpendicular to the direction the wave is
travelling.
In a pair of polarized sunglasses are filters that allow light with one
polarization angle to pass through (vertical, 0∘), while absorbing the other
component (horizontal, 90∘). For light with some other polarization angle (say,
45∘), such a polarizing filter naturally divides the electric field into horizontal and
vertical components, absorbing the former and passing the latter. (The exact two
components depend on the angle of the filter, which you can confirm by watching
the blue sky gradually change color as you tip your head while wearing polarized
sunglasses.)
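The rule behind this splitting is Malus’ law: a filter at angle θf passes the fraction cos²(θf − θ) of the intensity of a beam polarized at angle θ. A minimal numerical sketch of the sunglasses example, in Python:

    import math

    def transmitted_fraction(filter_deg, pol_deg):
        """Malus' law: intensity fraction a polarizing filter at
        filter_deg passes for light linearly polarized at pol_deg."""
        return math.cos(math.radians(filter_deg - pol_deg)) ** 2

    # Vertical filter (0 degrees), as in the sunglasses example:
    for pol in (0, 45, 90):
        print(f"{pol:2d} deg light -> fraction passed: {transmitted_fraction(0, pol):.2f}")
    # prints 1.00, 0.50 and 0.00 respectively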
But there are also optical devices that reflect one polarization component
while passing the other; an example of such a “polarizing cube” is depicted in
Figure 1. This cube has a controllable angle σR that splits the incoming light
into two components. (The portion polarized at angle σR passes into
the R1 path, and the portion polarized at σR + 90∘ reflects into the R0
path.)
A striking feature of CEM is that the equations are all time-symmetric,
meaning that if you can run any solution to the equations in reverse, you’ll have
another solution. Imagining the time-reverse of Figure 1 is the easiest way to see
that these polarizing cubes can also combine two inputs into a single output
beam, as in Figure 2. Different input combinations will produce different output
polarizations τ. Putting all the input beam on L1 will force τ to be equal to
the angle σL; an L0 input will force τ = σL + 90∘; and an appropriate
mix of L1 and L0 can produce any polarization angle in between these
extremes.
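For definiteness, here is that recombination arithmetic as a small Python sketch, on the simplifying assumption that the two inputs arrive in phase (so that the output is again linearly polarized): the output angle τ = σL + arctan(EL0/EL1) interpolates between σL and σL + 90∘ according to the ratio of the input amplitudes.

    import math

    def combined_polarization(sigma_L, amp_L1, amp_L0):
        """Output angle tau when field amplitudes amp_L1 (polarized at
        sigma_L) and amp_L0 (at sigma_L + 90 deg) merge in phase at the
        cube of Figure 2. Polarization angles are defined modulo 180."""
        return (sigma_L + math.degrees(math.atan2(amp_L0, amp_L1))) % 180

    print(combined_polarization(30, 1.0, 0.0))  # all on L1 -> 30.0 (= sigma_L)
    print(combined_polarization(30, 0.0, 1.0))  # all on L0 -> 120.0 (= sigma_L + 90)
    print(combined_polarization(30, 1.0, 1.0))  # equal mix -> 75.0 (halfway between)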
One of Einstein’s great discoveries in 1905 – he thought of it as the most
important of all – is that CEM is not always correct. It breaks down when
the energy in the electromagnetic field is very small – i.e., at the levels
of Einstein’s light quanta, or photons, as we now call them. Einstein
himself does not seem to have thought about the implications of this
“quantization” for polarization, and the classic early discussion (to which
Einstein later refers approvingly) is due to Paul Dirac [9] in 1930. In terms of
our example, the crucial point is that in the case of a single photon,
the light passing through the cube in Figure 1 will not split into two paths.
Instead, when measured, the photon is always found to be on R1 or on
R0, not on both. (The probabilities of these two outcomes are weighted
such that in the many-photon limit the CEM prediction is recovered, on
average.)
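A toy simulation makes the many-photon limit vivid. The weighting in question is the single-photon version of Malus’ law – a photon with polarization τ exits on R1 with probability cos²(τ − σR) – and averaging over many photons recovers the classical split. A sketch, with illustrative angles of our own choosing:

    import math, random

    def photon_exit(tau, sigma_R):
        """One photon at the right-hand cube: 'R1' or 'R0', chosen with
        the probabilistic weights that recover CEM in the average."""
        p_r1 = math.cos(math.radians(tau - sigma_R)) ** 2
        return "R1" if random.random() < p_r1 else "R0"

    tau, sigma_R, n = 30.0, 75.0, 100_000
    hits = sum(photon_exit(tau, sigma_R) == "R1" for _ in range(n))
    print(f"fraction on R1 : {hits / n:.3f}")  # ~0.5 for this 45-degree offset
    print(f"CEM prediction : {math.cos(math.radians(tau - sigma_R)) ** 2:.3f}")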
This change may seem innocuous, but it has profound implications. At the
very least, it shows that CEM fails and has to change. But how? Einstein
wrestled with the problem of how to combine quantization with the classical
theory for a few years, but eventually moved on to what he thought of
as the simpler problem of general relativity! As for Dirac, he simply
dodged the issue, by asserting that it wasn’t the business of physics to
look for a story about the underlying reality: “Questions about what
decides whether the photon [goes one way or the other] and how it changes
its direction of polarization when it does [so] cannot be investigated by
experiment and should be regarded as outside the domain of science.” ([9],
p. 6)
To the extent that quantum theory does provide us with a
picture, the orthodox view is to let CEM run its normal course
until the photon passes through the cube and encounters some
detectors.
Then, at that very last moment, the measurement process “collapses” all the
light onto only one detector, either on R0 or R1, just as it arrives.
Einstein disliked this “collapse” – it is what he had in mind when
complaining about “spooky action at a distance” (since the R0 and R1
detectors may be far apart). He thought that the so-called collapse was
just the usual process of updating of our knowledge after acquiring new
information, and didn’t involve any real change in the physical system. And
he hoped for a theory providing some “hidden variables”, describing
how the light quanta actually behave. On such a view, presumably, the
photon will be regarded as “making a choice” as it passes through the
cube – though not necessarily a choice determined by any pre-existing
property.
Neither Einstein nor Dirac seems to have considered the question of the
time-symmetry of these simple polarization experiments. On the face of it,
however, both a collapse picture and the “choosy photon” picture turn out to be
incompatible with the principle that the world is time-symmetric at a
fundamental level. To see what that means, and why it matters, let’s turn to
time-symmetry.
3 Time-Symmetry
How can it be that the laws of fundamental physics are time-symmetric? After all, as we already noted, many ordinary processes seem to have a clear temporal preference – like cream mixing in coffee, they happen one way but not in reverse. But by the end of the nineteenth century, most physicists had come to the view that these cases are not fundamental. They reflect the statistical behaviour of huge numbers of microscopic constituents of ordinary stuff, and the fact that for some reason we still don’t fully understand, stuff was much more organized – had very low entropy, as physicists say – some time in the past. But this explanation seems fully compatible with time-symmetry at the fundamental level.
This didn’t mean that fundamental physics had to be time-symmetric.
In principle, there might be some deep time-asymmetry, waiting to be
discovered. But by this stage, a century ago, the tide was running firmly in
the direction of fundamental symmetry. And certainly Einstein read it
this way. Although he recognised that time-asymmetric fundamental
laws were possible in principle, he thought them unlikely. Writing to
Michele Besso in 1918, he says, “Judging from all I know, I believe in the
reversibility of elementary events. All temporal bias seems to be based on
‘order’.”[10]
Since then, the evidence has changed in one important respect. We now know
that there is a subtle time-asymmetry in particle physics. But there is a
more general symmetry, called CPT-symmetry, that is widely regarded as
fundamental (and is equivalent to time-symmetry for the cases we are
discussing here). So for practical purposes, the consensus remains the
same: time-symmetry (or strictly speaking CPT-symmetry) is something
fundamental.
Still, this attitude to time-symmetry “in principle” often co-exists with a
distinctly time-asymmetric working model of the quantum world, as we can see
by thinking about how CEM is typically modified to explain Einstein’s photons.
It turns out that both the “collapse” view and the “choosy photon” view
described above are time-asymmetric.
To see why, we need to know that the crucial test for a time-symmetric
theory is not that everything that happens should look exactly the same in
reverse. For example, a rock travelling from Earth to Mars looks different in
reverse – it now goes from Mars to Earth – even though the underlying
mechanics is thoroughly time-symmetric. The test is that if the theory allows one
process then it also allows the reverse process, so that we can’t tell by looking at
a video of the process whether it is being played forwards or backwards. (If one
process isn’t allowed, then it is easy to tell: if the video shows the illegal process,
it must be being played backwards.)
With this in mind, think of a video of the apparatus shown in Figure 3, for a
case in which a photon came in from the L1 path and was then detected on the R1
path.7
This video shows the photon entering through a cube set to σL, exiting through a
cube set to σR, with a polarization τ = σL in between.
Reversing the video shows a photon entering the apparatus through a cube
set to σR, exiting through a cube set to σL, on the same paths as before, and still
with a polarization τ = σL in between. And this isn’t allowed, according to
either the collapse model or the choosy-photon model (except in the special
case where σL = σR).
On the contrary, if the photon enters through a cube set to σR, using
Path 1, the intermediate polarization should be τ = σR, not τ = σL. In
these models, then, time-reversing a legal process often leads to an illegal
process.
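To see the asymmetry mechanically, here is a small Python sketch of the “video test” (our construction). It encodes the one rule common to the collapse and choosy-photon models – a photon entering on Path 1 through a cube set to σ must have intermediate polarization τ = σ (modulo 180∘, since polarization is an axis rather than a direction):

```python
import math

def allowed(entry_cube_deg: float, tau_deg: float) -> bool:
    """Legality rule, in these models, for a photon on Path 1: the
    intermediate polarization must match the entry cube's setting."""
    delta = (tau_deg - entry_cube_deg) % 180.0
    return math.isclose(delta, 0.0) or math.isclose(delta, 180.0)

sigma_L, sigma_R = 0.0, 30.0
tau = sigma_L                      # forward process: fixed by the left cube

print("forward legal: ", allowed(sigma_L, tau))   # True
# In the reversed video the *right* cube is the entry cube, but the
# intermediate polarization is still tau = sigma_L:
print("reversed legal:", allowed(sigma_R, tau))   # False unless sigma_L == sigma_R
```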
This time-asymmetry is entirely a result of quantization. In the classical case,
the light can always be split between the two paths, at one end of the
experiment or both. This ensures that the reversed version of any process
allowed by CEM in the apparatus in Figure 3 is also a legal process. In
the classical case, therefore, we can’t tell which way a video is being
played.
Somehow, then, combining discrete inputs and outputs with CEM destroys
time-symmetry (at least if one tries to preserve standard CEM between the two
cubes, as in the collapse view and the choosy photon view). In the next section
we’ll explore why this happens, and the surprising consequences of insisting that
time-symmetry be saved.
4 Control – Classical and Quantum
Does time-symmetry mean that the future controls the past just as much as the past controls the future? That can’t be true in general, because CEM is perfectly time-symmetric, and yet it is easy to see that we can use the apparatus in Figure 3 to send signals (and hence, if we wish, control things) from past to future – i.e., left to right – but not in the other direction. If we feed in a classical light beam on path L1, then by controlling the angle σL we can control τ, and hence encode a signal in the light beam. By measuring τ on the right hand side, we can then decode the message, or control some piece of machinery. But we can’t do that from right to left, of course, even though the equations of CEM are perfectly time-symmetric. So where does the difference come from?
The answer is that there’s still a big difference between the left and the right
of the experiment, the past and the future, despite the fact that the
laws of CEM are time-symmetric. On the left side, we normally control
whether the light comes from L0 or L1, and as we’ll see in a moment, this is
crucial to our ability to use the device to signal to the future. On the right
side, we don’t control how much of the light goes out via R0 and R1,
and it’s really this difference that explains why we can’t signal to the
past.
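As a toy version of this past-to-future signalling (our construction, with our own choice of two encoding angles), consider:

```python
def encode(bit: int) -> float:
    """Sender's choice of sigma_L: 0 degrees for 0, 45 degrees for 1."""
    return 0.0 if bit == 0 else 45.0

def classical_tau(sigma_L: float) -> float:
    """In CEM, the left cube fixes the beam's polarization exactly."""
    return sigma_L

def decode(tau: float) -> int:
    """Receiver measures tau and picks the nearer encoding angle."""
    return 0 if abs(tau - 0.0) <= abs(tau - 45.0) else 1

message = [1, 0, 1, 1, 0]
received = [decode(classical_tau(encode(b))) for b in message]
print(received == message)   # True: left-to-right (past-to-future) signalling
```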
We can check this diagnosis by imagining a version of the full experiment in
which we don’t control the inputs on the left, so that our control, or lack of it, is
symmetric on the two sides. To help us imagine this, let’s introduce a Demon
(from the distinguished family whose ancestors have been so helpful at other
points in the history of physics!). Let’s give the Demon exactly the control on the
left that Nature has (and we lack) on the right. The Demon therefore
controls the light beam intensities on the two channels L0 and L1. It
knows what setting σL we have chosen for the angle of the left cube,
and shares with us the goal of having a single light beam emerge from
the cube in the direction shown. The question is, who now controls the
polarization τ? Is it us, or the Demon, or is the control shared in some
way?
It turns out that the Demon now has full control. The Demon can make τ
take any value, by choosing appropriate intensities for the two input beams on L0
and L1. So the Demon could use the device to signal to the future, by controlling
the polarization – but we can’t! (See Figure 5, in Box 2, for a simple
analogy – we are in the same position as Kirk, and the Demon is Scotty,
with full control of the direction of the ship, so long as she can use two
thrusters.)
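On our reading of the classical behaviour, the Demon’s recipe is just vector decomposition: the L1 beam exits the cube polarized along σL, the L0 beam along σL + 90∘, so the Demon supplies amplitudes equal to the components of a unit vector at the target angle. A hedged sketch (all names ours):

```python
import math

def demon_amplitudes(target_tau_deg: float, sigma_L_deg: float):
    """Amplitudes to put on (L1, L0) so that the merged output beam is
    polarized at target_tau_deg, given the cube is set to sigma_L_deg."""
    delta = math.radians(target_tau_deg - sigma_L_deg)
    return math.cos(delta), math.sin(delta)

def resulting_tau(a_L1: float, a_L0: float, sigma_L_deg: float) -> float:
    """Polarization of the merged beam: sigma_L plus the vector angle."""
    return (sigma_L_deg + math.degrees(math.atan2(a_L0, a_L1))) % 180.0

sigma_L = 20.0                      # our setting; the Demon sees it and adapts
for target in (20.0, 50.0, 95.0):
    a1, a0 = demon_amplitudes(target, sigma_L)
    print(target, round(resulting_tau(a1, a0, sigma_L), 1))  # Demon hits any tau
```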
We can now see why in the ordinary case, we can’t use the device in Figure 3
to signal to the past, even though the underlying physics is time-symmetric. The
problem is simply that we don’t control the outputs, the amount of light that
leaves the apparatus on the two beams R0 and R1. Nature controls that, not us.
If we make a change in σR, Nature makes a corresponding change in the
amount of light on the two beams R0 and R1, and nothing changes in the
past.
But now let’s see what happens if we “quantize” our Demon, by insisting that
it input a light beam on a single channel – either L0 or L1, but not both at the
same time. It turns out that this makes a big difference to what we can control
on the left side of the apparatus – though still not quite enough to enable us to
signal to the future!
With this new restriction on the Demon, choosing σL for the orientation of
the left cube means that there are only two possibilities for τ. Either τ = σL, in
the case in which the Demon uses channel L1, or τ = σL + 90∘, in the case in
which the Demon uses channel L0. This means that while we don’t control τ
completely, we do have a lot more control than before we quantized our Demon.
We can now limit τ to one of just two possibilities, 90∘ different from one
another. (See Box 2 again – this is like the case in which Scotty can only
use one thruster, giving Kirk some control over where the Enterprise
goes.)
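In code, our new control amounts to fixing a pair of values 90∘ apart (a sketch under the same assumptions as above):

```python
def possible_taus(sigma_L_deg: float):
    """With the Demon confined to one channel, tau is one of two values."""
    return (sigma_L_deg % 180.0, (sigma_L_deg + 90.0) % 180.0)

print(possible_taus(20.0))   # (20.0, 110.0): we fix the pair, the Demon picks one
```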
Summing up, the quantization restriction on the Demon gives us some
control from left to right, just by controlling the angle σL, of a kind we don’t
have without quantization. But now symmetry shows that if Nature is quantized
in the same sense – if Nature has to put the output beam on R0 or R1 but not
both – then we have the same new kind of control from right to left, just by
controlling the angle σR. If the underlying physics is time-symmetric, and if the
outputs on the right are quantized, controlling σR gives us some control over the
polarization of the photon, immediately before it reaches the right hand
cube!
Box 2: How discreteness makes a difference.
If Scotty has variable control of forward–reverse and left–right thrusters (shown as red arrows), then it doesn’t matter which way Kirk points the Enterprise (e.g., towards A, in Figure 5) – Scotty can still make the ship travel in whatever direction she chooses, by putting an appropriate portion of the power on two different thrusters at 90∘ to each other. But if Scotty can only use one thruster, then the Enterprise has to travel either in the orientation that Kirk chooses (along the line AD, in one direction or other), or at 90∘ to that direction (along the line BC, in one direction or other). So Kirk now has at least some control of the ship’s direction, rather than none at all! This is exactly the difference that quantization makes, in the case of polarization. (For precision, we specify (i) that this Enterprise lives in Flatland, and hence travels in only two dimensions, and (ii) that Scotty cannot vary the power of the thrusters during use – any thruster in use burns for the same time, at its pre-set intensity.)
Figure 5 | The Enterprise thruster analogy described in Box 2.
To visualise this possibility, we need to modify our diagram a little. In place
of the original τ, controlled by σL, let’s now write τL. And let’s introduce a new
label, τR, for the property that must be controlled by σR, if time-symmetry is to
be preserved. This gives us Figure 4 – we place τL and τR at left and right,
respectively, next to the piece of the apparatus by whose setting they
are (partially) controlled. (We’ll come back to the question of what might
happen in the middle – for the moment the “?” stands for “unknown
physics”!)
Summing up, we saw that the quantization restriction on the Demon gave us
some control over τL by manipulating σL. Time-symmetry implies that an
analogous quantization restriction on Nature gives us control over τR by
manipulating σR. Again, we can’t control τR completely, but we can limit
it to two possibilities: either τR = σR or τR = σR + 90∘. (Note that
this argument doesn’t depend on the details of QM in any way – we
would have this new kind of control in CEM, if Nature cooperated in this
way.)
4.1 In Einstein’s shoes
Now think about this argument from Einstein’s perspective, in 1905, after he’s
discovered that light is quantized, and before he’s heard about quantum
mechanics – it lies twenty years in the future, after all! He knows that there’s a
minimum possible amount of light that can pass through the apparatus, what we
now call a single photon.
For a single photon, it seems, the quantization restriction must hold. The
photon must travel on one path or other, at both ends of the experiment. And
that means that it is subject to this new kind of control – the control
that results from quantization – in both directions. (We are relying on
time-symmetry here, of course – if the laws governing the photon are allowed to
be time-asymmetric, one sort of control, left to right, or right to left, can
vanish.)
This simple argument shows that if we accept assumptions that would have
seemed natural to Einstein after 1905, we do have a subtle kind of influence over
the past, when we choose such things as the orientation of a polarizing cube. This
is precisely the possibility called “retrocausality” in contemporary discussions of
QM. As we noted above, most physicists think that they are justified in ignoring
it, and using “No retrocausality” as an uncontroversial assumption in various
arguments intended to put further nails in the coffin of Einstein’s view of the
quantum world. And yet it turns out to have been hiding just out of sight, all
these years!
4.2 Spooks against retrocausality
Defenders of orthodox QM, the view that Einstein came to dislike, are not going
to be impressed by this argument. They’ll point out that quantum theory allows
us to avoid one or both of the two crucial assumptions, of time-symmetry and
discrete (or quantized) outcomes. Both escape routes depend on the quantum
wave function – the strange object whose properties Einstein disliked so
much.
On some views, the wave function undergoes time-asymmetric “collapse”,
and so the underlying physics is not regarded as time-symmetric. On
other views, the wave function itself divides between the two output
channels, R0 and R1. By varying the “proportion” of the wave function on
each channel, Nature then has the flexibility she needs to respond to
changes we make to the angle σR, preventing us from controlling the past.
Either way, these versions of QM can escape the argument, and avoid
retrocausality.
But this escape from retrocausality rides on the back of the wave function –
the spooky, nonlocal object, not respectably resident in time and space, that
Einstein deplored in his letter to Born. From Einstein’s point of view, then, the
argument seems to present us with a choice between spooks and retrocausality.
We cannot say for certain which option he would have chosen – “There is
nothing one would not consider when one is in a predicament!”, as he
put it at one point, writing about the problem of reconciling CEM with
quantization [12] – but the choice makes it clear that anyone who simply
assumes “No retrocausality” in arguing with Einstein is tying their famous
opponent’s arms behind his back. In the light of the argument above,
retrocausality is the natural way of avoiding the spooks. To rule it out
before the game starts is to deprive Einstein of what may be his star
player.
4.3 Control without signalling
At this point, Einstein’s contemporary opponents are likely to object that
retrocausality is even worse than the so-called quantum spooks. After all, doesn’t
it lead to the famous paradoxes? Couldn’t we use it to advise our young
grandmother to avoid her unhappy marriage to grandfather, for example, and
hence ensure that we had never been born?
This is where the subtle nature of the new kind of control allowed by
quantization is important. Let’s go back to the case of the quantized Demon, who
controls the inputs on the left side of Figure 4, but can only use one
channel at a time: all the input must be on either L0 or L1, and not on
both at the same time. We saw that we do have some control over the
polarization τL in this case, but not complete control. The Demon always
has the option of varying the polarization we aim for by 90∘. But this
means that we can’t control the polarization enough to send a signal –
intuitively, whatever signal we try to send, the Demon always has the
option of turning it into exactly the opposite signal, by adding
90∘.8
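A quick simulation (our construction, reusing the Malus-law assumption from earlier) makes the point vivid: if the Demon flips a fair coin between its two channels, the statistics at a downstream cube are flat, whatever σL we choose.

```python
import math
import random

def receiver_click(sigma_L: float, sigma_R: float) -> bool:
    """One run: the Demon's coin picks tau = sigma_L or sigma_L + 90, then a
    cube at sigma_R registers R1 with probability cos^2(tau - sigma_R)."""
    tau = sigma_L + (0.0 if random.random() < 0.5 else 90.0)
    return random.random() < math.cos(math.radians(tau - sigma_R)) ** 2

n, sigma_R = 100_000, 10.0
for sigma_L in (0.0, 30.0, 60.0):     # our attempted "signal"
    f = sum(receiver_click(sigma_L, sigma_R) for _ in range(n)) / n
    print(f"sigma_L={sigma_L:5.1f}  P(R1)={f:.3f}")   # ~0.500 every time
```

Since cos²Δ and sin²Δ average to one half, the receiver sees a fifty-fifty split no matter what we do: the attempted signal never arrives.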
So the kind of control introduced by quantization is too subtle to allow us to
signal left to right, in the experiment in Figure 4. Hence, by symmetry, it is also
too subtle to allow us to signal from right to left, or future to past. We couldn’t
use it to send a message to our grandmother, and history seems safe from
paradox. Despite this, we can still have some control over hidden variables in the
past, of the kind needed for retrocausality to resolve the challenge of Bell’s
Theorem in Einstein’s favor.
5 Where next?
So discussions of the meaning of quantum theory might have been very different, if Einstein had noticed how quantization suggests retrocausality. But would it have made any difference in the end? Is there any practical way to rescue Einstein’s dream of a spook-free quantum world, if we do allow it to be retrocausal?
As we noted earlier (see Box 1), there are several retrocausal proposals on the
table. But some, like the Aharonov–Vaidman Two-State proposal, or the earlier
Transactional Interpretation, try to build their retrocausal models with the same
kind of elements that Einstein objected to – wave functions not properly located
in space and time.
If we want to stay close to the spirit of Einstein’s program, then,
we’ll need to start somewhere else. One such proposal aims to extract a
realistic model of quantum reality from the techniques developed by
another of the twentieth century’s quantum giants, Richard Feynman.
Feynman’s “path integral” computes quantum probabilities by considering
all the possible ways a quantum system might get from one event
to another, and assigning a weight, or “amplitude”, to each possible
“path”.9
The path integral is usually regarded simply as a calculational device.
Most people, including Feynman himself, have thought that it doesn’t
make sense to interpret it as telling us that the system really follows
one particular history, even if we don’t know which. Attractive as
this thought seems in principle, the probabilities of the various paths
just don’t add up in the right way. But there are some clues that
this might be made to work, using non-standard versions of the path
integral.10
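A two-history toy example shows both Feynman’s recipe and the obstacle just mentioned (our illustration, with arbitrary phases and ħ = 1; it is not any of the cited proposals):

```python
import cmath

# amplitudes e^{iS} for two rival histories between the same two events
a1 = cmath.exp(1j * 0.0)
a2 = cmath.exp(1j * 2.0)

p_feynman = abs(a1 + a2) ** 2            # add amplitudes, then square: ~1.17
p_naive = abs(a1) ** 2 + abs(a2) ** 2    # "one real path", additive odds: 2.0
print(p_feynman, p_naive)                # the interference cross-term is the gap
```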
Applied to the case we have been discussing, this approach suggests that
electromagnetic fields are not strictly constrained by the equations of CEM (a
possibility that Einstein certainly contemplated). Instead, the path integral
considers all possible field histories, even apparently crazy cases where a photon’s
polarization rotates from τL to τR in empty space. In fact, only by considering
such non-classical cases can this approach make accurate predictions. Such a
scenario, involving a real polarization rotation between the cubes, is exactly the
sort of process that could restore time-symmetry to the case of single
photons.
Any approach that does succeed in interpreting the Feynman path integral
realistically – i.e., that makes sense of the idea that the system actually follows
just one of the possible histories that make up the path integral – is likely to be
retrocausal. Why? Simply because we have control over some aspects of the
“endpoint” of any path, when we choose to make a particular measurement (e.g.,
when we choose σR in Figure 4). In general, our choice makes a difference to the
possible paths that lead to this endpoint, and if one of those paths must be the
real path followed by the photon, then we are affecting that reality, to some
extent.
In putting future and past on an equal footing, this kind of approach is
different in spirit from (and quite possibly formally incompatible with) a more
familiar style of physics: one in which the past continually generates the future,
like a computer running through the steps in an algorithm. However, our usual
preference for the computer-like model may simply reflect an anthropocentric
bias. It is a good model for creatures like us, who acquire knowledge sequentially,
past to future, and hence find it useful to update their predictions in the same
way. But there is no guarantee that the principles on which the universe is
constructed are of the sort that happens to be useful to creatures in our
particular situation.
Physics has certainly overcome such biases before – the Earth isn’t the center
of the universe, our sun is just one of many, there is no preferred frame of
reference. Now, perhaps there’s one further anthropocentric attitude that needs
to go: the idea that the universe is as “in the dark” about the future as we are
ourselves.
It is too early to be sure that the spooks are going to be driven out of the
quantum world; too early to be entirely confident that retrocausality will rescue
Einstein’s dream of a relativity-friendly quantum reality, living in space and time.
What is clear, we think, is that there are some intriguing hints that point in
that direction. Most intriguing of all, there’s a simple new argument
that suggests that reality must be retrocausal, by Einstein’s lights, if we
don’t go out of our way to avoid it. It is too early to award Einstein
the last laugh, certainly; but too soon to dismiss the dreams of the old
giant.
Notes
1 The same loophole exists in the other so-called “No Hidden Variable theorems”, that seek to prove that there can’t be a deeper reality underlying quantum theory, of the kind Einstein hoped for. For simplicity we’ll focus on Bell’s Theorem, but the spook-dispelling technique we’re talking about works equally well in the other cases.
2 Bell himself certainly thought there was such a tension – as he puts it at one point, “the cheapest resolution [of the puzzle of nonlocality] is something like going back to relativity as it was before Einstein, when people like Lorentz and Poincaré thought that there was an aether.”[5]
3 Quantization does so via the argument below. Relativity does so via Bell’s Theorem, since the retrocausal loophole provides a way to escape the apparent tension between Bell’s result and relativity.
4 Technically, the orthodox view uses an equation that evolves a quantum state instead of electromagnetic fields, but the sentiment is the same.
5 As Einstein says, “Dirac ... rightly points out that it would probably be difficult, for example, to give a theoretical description of a photon such as would give enough information to enable one to decide whether it will pass [one way or the other through] a polarizer placed (obliquely) in its way.”[11]
6 See Sean Carroll’s blog [bit.ly/SCeternity] for an accessible introduction.
7 We can’t literally make videos of individual photons, but we can make computer-generated videos showing what our theories say about photons, and they work just as well, for applying this test.
8 More formally, this is a consequence of the so-called No Signalling Theorem. A protocol that allowed us to signal in this case would also allow signalling between two arms of an analogous entanglement experiment with polarizers, in violation of the No Signalling Theorem – for the correlations in the two cases are exactly the same.[13]
9 These “paths” are not necessarily localized trajectories of particles, but could be the entire “histories”, between the two events in question, of something spread out in space, such as an electromagnetic field. (This extension from the particle-path integral to a field-history integral is used in quantum field theory.)
10 As in a few recent efforts [14][15]; see also [16].
References
[1] Einstein, A., Letter to Max Born, 3 March 1947, published in Max Born, ed., The Born-Einstein Letters: Friendship, Politics and Physics in Uncertain Times (Macmillan, 1971), p. 178.
[2] Albert, D. Z. & Galchen, R., ‘A quantum threat to special
relativity’, Scientific American, 300, 32–39 (2009).
[3] Price, H., ‘Does time-symmetry imply retrocausality? How the
quantum world says “maybe”’, Studies in History and Philosophy of
Modern Physics, 43, 75–83 (2012). arXiv:1002.0906
[5] Bell, J. S., Interview published in P. C. W. Davies and
J. R. Brown, eds., The Ghost in the Atom (Cambridge 1986), 48–9.
[6] Bell, J. S., Letter to H. Price, 8 June 1988, quoted in Huw Price,
Time’s Arrow and Archimedes’ Point (Oxford University Press, 1996).
[11] Einstein, A., ‘Maxwell’s influence on the idea of physical reality’,
in James Clerk Maxwell: A Commemorative Volume (Cambridge, 1931).
Reprinted in Einstein’s Ideas and Opinions (Bonanza, 1955), pp. 266–70.
[13] Evans, P. W., Price, H. and Wharton, K. B., ‘New slant on the
EPR-Bell experiment’, Brit. J. Phil. Sci. 64, 297 (2013). arXiv:1001.5057