Review of Entanglement


The author, one of my Facebook friends, died almost one year ago. This is one of his best books. Amir, fine science writer that he was, covered a diversity of topics well – including my favorite, mathematics. I enjoyed his Fermat’s Last Theorem: Unlocking the Secret of an Ancient Mathematical Problem, even though it’s a bit short and superficial – and he confused elliptic curves with elliptic functions. (A common error.) I’m looking forward to reading My Search for Ramanujan, which Amir co-authored with Ken Ono. Since Ono is a world-class expert on topics that interested Ramanujan, such as partition theory, their book should be excellent.

Like all books Amir wrote himself, Entanglement is at an elementary level, intended for general readers. I think it does an excellent job with the subject matter. There are, however, better books on quantum entanglement for readers who are more conversant with quantum mechanics, such as Nicolas Gisin’s Quantum Chance: Nonlocality, Teleportation and Other Quantum Marvels, which I reviewed here. Gisin led a group that performed a key experiment verifying the nonlocal character of quantum entanglement, and that experiment receives a whole (short) chapter in Amir’s book. But perhaps the most noteworthy aspect of the book reviewed here is that it also describes several other very important experiments on entanglement, of which I know no other treatment at the elementary level. Among these is a fairly detailed account of “triple entanglement”, which involves three entangled particles instead of the usual two. It’s an important experiment, since it clearly demonstrates the inherent nonlocality of quantum entanglement without having to rely on Bell’s Theorem.

Amir’s book starts out slowly and covers all the usual history of quantum mechanics, from Planck and Einstein onward. This includes the customary topics, such as the contributions of Bohr, Heisenberg, de Broglie, Schrödinger, von Neumann, Dirac, Bohm, and Wheeler. This early history occupies the first half of the book, so readers who are already familiar with it may become impatient to get to the more recent “good stuff”.

Erwin Schrödinger discovered the phenomenon of entanglement, and considered it the most significant aspect of quantum mechanics. He wrote, “Entanglement is not one but rather the characteristic trait of quantum mechanics.” Entanglement depends on another aspect of quantum mechanics, “superposition”, which had been discovered earlier. The two terms should not be confused, even though they are closely related. Superposition is the strange property by which one or more quantum particles can actually be in multiple potentially observable simple states at the same time. (This property is what makes quantum computers possible and interesting.) Two particles that have interacted at one point in time can be in a joint state that is a superposition of states of both particles. In this case, the particles may be “entangled”, so that a measurement on one particle instantaneously determines what will be measured on the other, even if the two are millions of light-years apart when measured.
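This perfect correlation between measurements can be illustrated with a short Python sketch (my illustration, not from the book), representing the simplest entangled two-particle state as a vector of four amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>)/sqrt(2): a superposition of two joint states,
# in which neither particle has a definite state of its own.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Probabilities of the four joint outcomes 00, 01, 10, 11
probs = np.abs(bell) ** 2

# Sample many joint measurements in the standard basis
outcomes = rng.choice(4, size=10_000, p=probs)
first = outcomes // 2   # result for particle 1 (0 or 1)
second = outcomes % 2   # result for particle 2 (0 or 1)

# Each particle's own result is random (about 50/50)...
print(first.mean())             # ≈ 0.5
# ...yet the two results always agree: measuring one fixes the other.
print(np.all(first == second))  # True
```

Of course, nothing in this toy calculation shows the distance-independence that makes entanglement remarkable; it only shows how perfect correlation coexists with individual randomness.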

Obviously, this is quite a peculiar phenomenon, and it sorely perplexed Einstein. Nevertheless, it has been rigorously verified numerous times since the 1970s. The details of many of these experiments occupy the second half of Amir’s book. This part of the book requires much more careful reading than the first half, but it’s well worth the effort. If one has any interest in learning about recent practical aspects of quantum technology – such as quantum computing, quantum cryptography, and quantum teleportation – these are the details that one needs to develop a good understanding of. A reader can feel confident of the book’s accuracy in dealing with these topics, since the author spent quite a lot of time discussing the experiments with a number of physicists who actually performed them.

Posted in Book review, Physics, Quantum theory

Review of Chance and Chaos


David Ruelle’s book is only 166 pages (excluding the extensive notes). In that space it covers an astonishing number of related topics. However, for a general reader interested in chaos and complexity theory, it’s not the best place to start. Topics such as “strange attractors” and the “butterfly effect” came to public attention with James Gleick’s Chaos almost three decades ago. Since then many books on the subject for general readers have appeared, such as John Gribbin’s Deep Simplicity. I’d recommend either of these or similar books as a suitable place to get into the subject.

The discussion in the book under review is able to be concise for two reasons. First, it assumes some acquaintance with basic physics. Comfort with college-level mathematics is helpful but less essential, since the details are mostly confined to notes at the end of the book. Second, there is no attempt to go into much depth on any of the topics. Instead, the reader gets a lucid bird’s-eye view of each topic and of the various threads that connect the theory with its potential applications.

Here’s an incomplete list of the topics that are touched upon: determinism, probability, game theory, turbulence in fluid flow, sensitive dependence on initial conditions, strange attractors, chaos, economics, quantum theory and indeterminacy, entropy, irreversibility, statistical mechanics, phase transitions, information theory, complexity, computability, biological evolution and the importance of sexual reproduction, and intelligence. (If one wants to quickly find references to each topic, it’s unfortunate that the book lacks an index.)

Ruelle, a mathematical physicist, is very well qualified to write on the physics-based topics. He was an important contributor to the theories of turbulence and statistical mechanics, and shares credit for the term “strange attractor”. (An “attractor” is a relatively stable state towards which a dynamical system may evolve. It is “strange” if it has a fractal structure.) Gleick’s book (Chaos) mentions Ruelle frequently and explains his contributions.

I give the book a top rating for its clarity and breadth, in spite of its brevity. It provides a fine orientation for a reader who wants to pursue any of its topics more deeply.

Posted in Book review, Mathematical physics, Physics

Another review of Quantum Chance, by Nicolas Gisin

In the world at the scale of a few atoms or less, the rules of physics are quite different from those of everyday experience. Quantum mechanics lays out the rules that apply at the smallest scales. Many strange phenomena are found in this realm, including indeterminacy, tunneling, superposition of states, entanglement, and nonlocality. The last of these, nonlocality, refers to a situation in which a measurement of certain properties of one particle seems to affect a related measurement on a particle “entangled” with it, despite a separation between the particles vastly larger than their size. (Entanglement of two or more particles means that certain properties of the particles aren’t independent, and can be affected by a measurement of the property on any particle in the group.)

Nonlocality is perhaps the strangest quantum phenomenon, and it’s the main topic of the book by Nicolas Gisin reviewed here. If you’d enjoy reading a book that explains actual cutting edge quantum physics – one that doesn’t even require any sophisticated math – this one may be for you. The physics is cutting edge in the sense that just in 2015 the results of several difficult experiments were published, removing almost all doubt that the mysterious phenomenon of nonlocality is a real thing. (References and some details on the experiments can be found here.)

Nonlocality is so counter-intuitive that even Einstein could think of no better description than the unfortunate and deeply misleading “spooky action at a distance”. The phrase is used carelessly by writers on quantum physics with far less personal expertise in the subject than Gisin. What’s so wrong about it is that there isn’t really any “action” involved in the phenomenon at all. Instead, what experiments now confirm is that measurements on entangled quantum particles such as photons and electrons can be more highly correlated than could be explained either by direct communication between the particles (which is ruled out) or by shared “hidden variables” (properties unknown to physics).

It’s necessary to be clear about what’s meant by “correlation”. For a particle like an electron, Heisenberg’s uncertainty principle says there are limits to how precisely certain pairs of quantities, such as position and momentum, can be measured. For example, if the position of a particle is very precisely known, then its momentum must be very indeterminate. Two particles can be entangled so that they should have the same momentum. If the position of one of the two particles is measured very precisely, so there is very little uncertainty about it, then there must be a large uncertainty about its momentum. If the momentum of the second particle is then measured, there can’t be only a little uncertainty about it, since otherwise there would be little uncertainty about the momentum of the first particle, which isn’t the case. This dependence of measurements on one particle on measurements of the other is what is meant by correlation.

More generally, there are various pairs of measurements that can be made on different types of entangled particles, and there are ways to quantify the statistical correlation between one specific measurement on the first particle of a pair and a related (but not necessarily identical) measurement on the second. In 1964 physicist John Bell proved a theorem putting an upper limit on a certain combination of these correlations in any theory with “hidden variables” (properties the two particles share even if they can’t be measured directly); the limit is known as Bell’s inequality. But this limit holds only if there is no way for one measurement to “know” the type and result of the other.
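The books under discussion stay away from formulas, but the standard modern version of Bell’s inequality – the CHSH form, due to Clauser, Horne, Shimony, and Holt – can be evaluated in a few lines of Python. Local hidden variables cap the CHSH combination of correlations at 2, while quantum mechanics predicts 2√2 for suitably chosen measurement angles on a pair of spin-½ particles in the singlet state:

```python
import math

# Quantum prediction for the correlation between spin measurements
# at analyzer angles a and b on a spin-1/2 singlet pair.
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angles: each side randomly picks one of its two settings.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# The CHSH combination of the four possible correlations
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # 2*sqrt(2) ≈ 2.828, above the local-hidden-variable bound of 2
```

The gap between 2 and 2.828 is exactly what the experiments described later measure: any observed value above 2 rules out local hidden variables.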

Consider an experiment in which the choice between a pair of related measurements is completely random, with probability ½ for either measurement. Suppose it is the momentum of one particle of an entangled pair that is actually measured. Quantum mechanics requires that no physical measurement can be 100% precise, but the uncertainty can be very small. By the uncertainty principle, the other measurement (i.e. position) of the same particle must then be very uncertain.

Now suppose there is a hidden variable that forces either measurement on the second entangled particle to have the same degree of uncertainty as for the first particle. Suppose the momentum of the first particle is measured with very little uncertainty. If the momentum of the second particle happens to be measured, it must be the same as for the first, to within a very small amount. That would give an almost perfect 100% correlation. But if the alternate measurement (i.e. of position) on the second particle happens to be made (with probability ½), the measurement must have large uncertainty, just as it would if it had been done on the first particle. So if this process is repeated many times, the spread of values of measurements on the second particle will be very large, which reduces the average correlation found after many trials.

Finally, suppose that this experiment is done with the particles so far apart in space and/or time that special relativity prevents any measurement on the second particle from “knowing” which measurement was made on the first or what the result was. That is, there can’t be any communication of information between the two measurements in each cycle. The bottom line is that if there are hidden variables of the type supposed, then the upper limit on the quantity reflecting the correlation between measurement results over a large number of trials is lower than if there are no such hidden variables.

This does assume that information about the hidden variables can’t propagate faster than the speed of light, since that would violate special relativity. (Later theoretical studies have shown it’s enough to assume the information can’t propagate at any finite speed below a certain amount that is much higher than the speed of light.) Such variables are called “local” hidden variables. The only way to violate Bell’s inequality – as long as both special relativity and quantum mechanics are correct – is for local hidden variables not to exist. And if the inequality is found by experiment to be violated, then “nonlocality” must be a real phenomenon.

It’s still possible that there could be hidden variables whose values could change instantaneously across the whole universe – “nonlocal” hidden variables – though most physicists consider that implausible. Bell’s theorem wouldn’t apply if such variables exist. No one can imagine how to test for such things, since that would require measurements on particles arbitrarily far apart, any of which could turn out to be correlated more than expected.

The first experiment that clearly tested Bell’s inequality in the laboratory was performed by Alain Aspect and reported in 1982. It showed that Bell’s inequality can in fact be violated. Many experiments since then have supported the same conclusion, and have successively ruled out all “loopholes” that might have invalidated it. Typically, such loopholes involve either technological limits or else ways that two supposedly independent measurements could know more than they “should” about each other.

The earliest experiments showing a violation of Bell’s inequality involved measurements made on particles close enough together that signals could pass between them at or below the speed of light. That left open the possibility that local hidden variables could explain the result, however unlikely the existence of such variables seemed. But eventually experimental technology allowed entangled particles to be measured even when they are so far apart at the time of measurement that information transfer at less than the speed of light is impossible. (The upper speed limit, much higher than the speed of light, now allowed by theoretical studies is far beyond what can be tested experimentally.)

Another technological loophole has been eliminated by making measurements on electrons instead of the photons used in most earlier experiments. Individual photons are hard to detect, so many go undetected, which introduces additional statistical uncertainty. Individual electrons are much easier to detect but harder to work with for other reasons, so more advanced technology is required. Yet quite recently an experiment using electrons has seemingly eliminated all plausible loopholes. So correlations that violate Bell’s inequality in such cases demonstrate that nonlocality is a fundamental feature of quantum mechanics.

Although good explanations of nonlocality don’t currently exist, Gisin’s book probably does the job of describing it better than any other book yet published. Very little mathematics is required to follow the description, apart from a few elementary probability calculations. On the other hand, the logic is more of a challenge, especially in the second chapter, which develops the reasoning that leads to Bell’s inequality. But if you take the time to follow that discussion carefully, you’ll be all set.

The reward a reader can expect from this book is an appreciation for perhaps the deepest and most unexpected discovery in quantum physics: the now experimentally confirmed fact that quantum reality is nonlocal in an essential way. Appropriate measurements made simultaneously on entangled particles are better correlated than Bell’s inequality allows for any local hidden-variable explanation, even if the particles happen to be on opposite sides of the universe. Nobody knows how this can be the case – yet it is.

This fact is of far more than just theoretical interest. Entanglement of particles, and its consequences for measurement, is fundamental to now-existing technologies such as fully secure quantum encryption and communication, and to the possibility someday of quantum computers that can solve certain problems “exponentially” faster than existing “classical” computers.

Chapter 2 is the heart of the book. It explains the reasoning behind the violation of Bell’s inequality in a way that doesn’t involve quantum theory at all. Nevertheless, careful attention is needed to follow the explanation. It would have been nice if the author had shown how the same conclusion follows from reasoning based on standard qubit notation. That would have been useful when the so-called “no cloning” theorem, quantum cryptography, and quantum communication are discussed later in the book. In fact, a significant shortcoming of the book is that its presentations of these topics are somewhat superficial, so the reader doesn’t get as clear a picture of them as would be desirable. But that’s the trade-off for keeping the book rather brief – a mere 110 pages (excluding front material).

The book’s brevity is also the reason there’s no discussion of quantum computing at all. The use of quantum qubit notation would have helped a lot for that, and also for explaining how entanglement is essential for making quantum logic gates, which are the basic building blocks of a quantum computer. However, the book’s focus is on the topic of nonlocality, which plays little role in quantum computing.

Another result of the brevity is the lack of an index, and this is especially unfortunate, since there will be many times a reader might want to refer back to points raised earlier. And even though some references are cited in footnotes, there is no bibliography, which would provide references that enable a reader to learn much more about topics the book doesn’t treat in much depth.

Posted in Book review, Physics, Quantum theory

Review of Quantum Chance, by Nicolas Gisin

The author, Nicolas Gisin, is a world-class expert in the subject of the book’s subtitle: quantum “nonlocality, teleportation, and other quantum marvels”. He was a principal investigator of an experiment – performed in 1997 near Geneva, Switzerland – that gave nearly watertight evidence for one of the strangest properties of quantum theory: “nonlocality”. This is the main topic of the book, which was published in 2014 and is just about the most recent one covering the subject for the non-professional reader.

What is “nonlocality”? Answering that question, in terms an educated layperson can understand, is the purpose of the book. All I can do here is try to indicate the gist of the matter. In 1935 Albert Einstein, with two collaborators (Boris Podolsky and Nathan Rosen), published a paper describing a thought experiment that seemed to show an inconsistency between quantum mechanics and special relativity. This became known as the “EPR paradox”. The thought experiment was based on a pair of elementary particles, such as electrons, that had been prepared in what is called an “entangled” state. That means any measurement made on certain properties of one particle must be correlated with a related measurement on the other particle (as long as they don’t interact with any other particles). And this must be true no matter how far the particles are separated in space and time when the measurements are made.

The possibility of entanglement of quantum particles is predicted by quantum theory. However, because of Heisenberg’s uncertainty principle it leads to an apparent paradox if the particles happen to be far enough apart at the time of measurement so that no signal can pass between them in the time between the two measurements. That’s because Einstein’s Special Theory of Relativity requires that no information can be transmitted from one point to another faster than the speed of light.

When certain measurements are made on one particle of an entangled pair, then a related measurement on the other particle must turn out in a specific way no matter how far apart the particles are. And the result of measuring the second particle would probably have been different if the first particle hadn’t been measured. In other words, a measurement at one place seems able to affect instantaneously a measurement somewhere perhaps very distant, in apparent violation of the special theory of relativity.

This also seems to violate the kind of indeterminacy required by quantum mechanics, because a measurement of quantum properties can typically yield any of a number of distinct results, each with a particular probability. More specifically, a quantum particle can be in a “superposition” of distinct “pure states”; after a measurement, the particle will be in only one of those pure states, each possible result occurring with its own probability. However, for particles that are entangled, the result of the first measurement can uniquely determine the result of the second. It is as though information about the first measurement somehow influences the other one.

Yet if the particles are so far apart that no information can pass from one particle to the other at less than or equal to the speed of light, then there is no way the second particle can know what measurement was made on the first particle or what the results were. To Einstein and many others it appeared that the only way out was that there must be some sort of unknown property (a “hidden variable”) that was shared by both particles.

In order to perform an experiment to test this hypothesis (which would make quantum mechanics and special relativity compatible), it was necessary to be able to create appropriate pairs of entangled particles, separate them by a distance large enough that no information could propagate from one to the other in the time between measurements, and derive an estimate of how much statistical correlation was possible between the measurements.

At the time of the EPR paper in 1935, physicists had no idea how to meet all those conditions, for two reasons. First, there was no adequate technology then for creating and measuring entangled particles (e.g. electrons or photons) in order to carry out the experiment. Second, there was no clear idea of what sort of correlation would show whether or not hidden variables existed. The problem is that some correlation (greater than zero) could be expected depending on the nature of the experiment. (If two people each toss a coin, the results will agree half the time purely by chance, even though the tosses are completely random.)
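The coin-toss baseline mentioned in the parenthesis is easy to verify with a quick simulation (my illustration, not from the book):

```python
import random

random.seed(1)
trials = 100_000

# Two people each toss an independent fair coin; count how often
# their results happen to agree.
agree = sum((random.random() < 0.5) == (random.random() < 0.5)
            for _ in range(trials))

print(agree / trials)  # close to 0.5: agreement half the time, by pure chance
```

Any test of hidden variables therefore has to distinguish excess correlation from this kind of chance agreement, which is what Bell's inequality makes possible.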

The second problem was solved in 1964 by physicist John Bell. He proved mathematically that there is a numerical upper bound on a certain quantity that reflects the correlations of the measurements, assuming that hidden variables exist and that no influence can travel faster than light, as special relativity requires. This numerical relation is called “Bell’s inequality”.

In the early 1980s, Alain Aspect (who much later wrote the foreword to Gisin’s book) and others were able to perform an actual experiment – and it was found that Bell’s inequality was in fact violated. When a long series of tests was performed, in which one of two related measurements was chosen randomly (50% probability) on separated but entangled particles, the relevant quantity was larger than Bell’s inequality allows if hidden variables exist. This cast substantial doubt on the idea of hidden variables (but only if they were “local”, meaning they couldn’t communicate information about their values faster than the speed of light).

It’s an important feature of such experiments that there are two possible measurements that can be made. A hidden variable shared by the entangled particles might determine what outcome should occur for each measurement. A hidden variable, if it existed, might also carry information about which measurement was made on the first particle, or about its result. But the hidden variables assumed in Bell’s theorem are local. Hence if the measurements are made when the particles are so far apart that no information from one measurement can reach the other before it’s made, then there is no way for the second particle to “know” what measurement was made on the first one. The second particle “sees” the first particle as it existed before any measurement was made. (The existence of “nonlocal” hidden variables, whose information could be accessible everywhere in the universe instantaneously, is conceivable, but that’s considered a very far-fetched possibility.)

More recently, Gisin’s team and others have performed more rigorous experiments that make the conclusion even more watertight. Until quite recently there had been certain “loopholes” in experiments, related to the details of the experiment, that could cast doubt on the validity of the findings. But in 2015 results were published of three separate experiments that seem to close all loopholes physicists consider plausible. The conclusion is quite clear: Bell’s inequality is violated. So unless either quantum mechanics or special relativity is wrong, there cannot be any “local hidden variables”.

Since the particles in the recent experiments are sufficiently far apart when they are measured, special relativity doesn’t allow information to pass from one measurement to the other. But strong correlations are found despite the large distance, so the correlation is called “nonlocal”. In principle, it seems as though the separation could be as great as the size of the visible universe. This is why nonlocality, though now confirmed, is such a mysterious finding.

The key element in these experiments is the entanglement between the particles being measured. The modern theory of quantum mechanics was formulated a few years before 1930, and physicists knew about entanglement very soon afterwards. But it wasn’t considered especially important or consequential before the EPR paper was published. Very few physicists saw its importance at an early date. Erwin Schrödinger was among the few, writing:

Entanglement is not one, but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought.

Today, the phenomenon of entanglement is of utmost importance, not only theoretically, but also for practical applications. Without entanglement quantum communication, quantum cryptography, quantum “teleportation”, and quantum computing would all be impossible. Some of these things are already commercially available products, not just laboratory curiosities.

Gisin’s book further explains things mentioned in this review, especially the nature of the experiments that showed violations of Bell’s inequality. However, not all of these topics are covered as thoroughly as they should be for a proper understanding. Quantum computing, for example, is hardly covered at all. The book’s brevity (about 110 pages, not counting front material) is a virtue, in that it allows a focus on the central topic. But it’s also a drawback because of how much of the whole story is omitted.

Here then are some shortcomings that you should be aware of.

1. The book uses hardly any mathematics at all, except for some simple probability arguments. Relatively simple mathematics could have been used effectively to better explain entanglement, Bell’s inequality, and how things like quantum teleportation and quantum computing work.

2. The most difficult chapter in the book describes a “game” whose structure reflects what is done in actual experiments. It may be easier to follow than the physics, since the mathematics is slightly more transparent, but the game differs considerably from the physical experiments and doesn’t involve quantum concepts at all.
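The game in that chapter appears to be essentially the “CHSH game” of the quantum-information literature, and its key numbers are easy to reproduce (my sketch, not the book’s): the best possible classical strategy wins 75% of rounds, while players sharing an entangled pair can win about 85.4% (Tsirelson’s bound).

```python
import math
from itertools import product

# CHSH game: a referee sends random bits x, y to two isolated players,
# who answer bits a, b without communicating. They win when a XOR b == x AND y.

# A deterministic (local hidden variable) strategy is just a pair of
# lookup tables mapping each player's input bit to an output bit.
def win_rate(fa, fb):
    wins = sum((fa[x] ^ fb[y]) == (x & y) for x, y in product((0, 1), repeat=2))
    return wins / 4

# Exhaustive search over all 4 x 4 deterministic strategies
best_classical = max(win_rate(fa, fb)
                     for fa in product((0, 1), repeat=2)
                     for fb in product((0, 1), repeat=2))
print(best_classical)     # 0.75

# Optimal quantum strategy using a shared entangled pair (Tsirelson's bound)
quantum = math.cos(math.pi / 8) ** 2
print(round(quantum, 4))  # 0.8536
```

Any observed win rate above 0.75 in repeated rounds rules out every local strategy, which is the game-theoretic face of a Bell inequality violation.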

3. There is a very short discussion of quantum cryptography, but it’s much too short for even a rudimentary understanding of how it works. So a reader never learns how quantum encryption makes it possible to detect any eavesdropping on a conversation.

4. There is no index or bibliography in the book. Lack of an index makes it difficult for a reader to locate where a particular topic or concept has previously been explained. Although some important references are given in footnotes, there’s little help for a reader who wants to go more deeply into various important topics. This is especially a problem, since the book itself doesn’t try to cover many topics in any real detail.

Despite these shortcomings, what the book does offer is very well done.

I’ve written another review of Gisin’s book that goes into a little more detail on some of the relevant physics. It’s here.

Posted in Book review, Physics, Quantum theory

The return of “Science and Reason”

From 2005 until 2012 I operated the Science and Reason blog hosted at Blogspot. In 2012 I took a leave of absence due to lack of time and increasing dissatisfaction with Google’s Blogger software. It’s all still there at the old location, but I’m starting over, for new material, with WordPress – a far superior platform. I probably won’t post here as often as before, since time is still a limited resource. But I do have material to write about for which this seems like a good choice.

In recent years I’ve also started other science-oriented blogs, but again haven’t continued to update them often. They are:

  1. Science Briefs (Tumblr) – brief notes about science news
  2. Today’s Science (WordPress) – in-depth accounts of scientific research
  3. Mathophilia (WordPress) – mostly exposition of topics in advanced mathematics

Of these I fully expect to add a lot to Mathophilia (and this blog). The others, probably not so much.

Posted in Uncategorized