ENTANGLEMENT PRESERVATION

The Sleeping Beauty approach

Two-qubit entanglement can be preserved by partially measuring the qubits to leave them in a 'lethargic' state. The original state is restored using quantum measurement reversal after the qubits have travelled through a decoherence channel.

Alexander N. Korotkov

A new and rather curious way of suppressing decoherence in entangled qubits has been demonstrated by Yong-Su Kim and colleagues1, as they now report in Nature Physics. The team realized the experiment using entangled photons; however, the concept is easier to understand in terms of the solid-state qubits for which it was originally proposed2. The method combats so-called amplitude-damping decoherence, which in solid-state language is energy relaxation from the excited state of a qubit to its ground state.

With some oversimplification, the idea is first to intentionally move each qubit close to its ground state by using a weak measurement; this can be thought of as a partial wavefunction collapse3. Decoherence is naturally suppressed in this 'lethargic' state, and the quantum state is therefore preserved for a long time with little deterioration. Then, when needed, the initial quantum state is restored by using another partial measurement of each qubit that reverses4,5 the effect of the first measurement; this can thus be called 'unmeasurement' or 'uncollapse'6. The idea is pretty simple, but relies on the weirdness of weak quantum measurement. It works for an arbitrary initial state2, and the experimental procedure for each entangled qubit1 is essentially the same as for one-qubit state preservation7.

In some sense the idea is borrowed from the Sleeping Beauty story8. A young princess is preserved for 100 years, which is only possible because she is in a deep sleep. However, there is a catch to this entanglement-preservation recipe. Continuing the analogy, the princess can be killed in the act of being put to sleep, and she can also be killed during revival. So the method is risky. Moreover, a stronger procedure is required for a longer preservation, which decreases the probability of success. In the entanglement experiment presented by Kim et al.1, the initial state survives only if all four quantum measurements give 'good' results, and for better state preservation one should use stronger measurements with a smaller probability of a 'good' result.

The method is essentially based on preferential selection of the cases without decoherence events, and the selection is performed by the collapse–uncollapse sequence. It is therefore worth questioning whether this procedure for entanglement preservation is useful or not. The answer depends on the end purpose. If we average over all cases, including unsuccessful ones, then more quantum information is lost using the procedure than without it. However, if we do not need all the cases, for example in entanglement distillation protocols, then the procedure can definitely be useful. Using another analogy, some people would surely prefer a small stack of good bricks for construction rather than a much bigger pile of broken bricks.

It is quite possible that this method of entanglement preservation will eventually be useful in some quantum-communication tasks. There is even a theoretical hope9 that such selective procedures can be used in quantum computing, although the Sleeping Beauty method obviously cannot compete with the present mainstream idea for battling decoherence in quantum computing: quantum error correction. (Another powerful idea is based on decoherence-free subspaces.) However, error-correction experiments are still at the stage of one-logical-qubit preservation, and in this sense the two-qubit experiment1 is important.
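To see how the collapse–uncollapse trick works quantitatively, consider a single qubit; the two-qubit protocol applies the same steps to each qubit independently2. The following is a minimal numerical sketch of the idea, not the authors' code, and the measurement strengths pw and pu and the damping strength g are illustrative parameters: a null-result weak measurement of strength pw pushes the qubit towards its ground state, the amplitude-damping channel acts, and a reversing measurement of strength pu, chosen such that 1 − pu = (1 − pw)(1 − g), restores the initial state exactly in the branch where no relaxation occurred.

```python
import numpy as np

def damp(rho, g):
    """Amplitude-damping channel: the excited state |1> decays with probability g."""
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
    E1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
    return E0 @ rho @ E0.T + E1 @ rho @ E1.T

def measure(rho, M):
    """Apply measurement operator M; return the conditioned state and its probability."""
    out = M @ rho @ M.T
    p = np.trace(out)
    return out / p, p

# initial state a|0> + b|1> (real amplitudes for simplicity)
a, b = np.sqrt(0.3), np.sqrt(0.7)
rho0 = np.outer([a, b], [a, b])

g = 0.5  # strength of the damping channel
for pw in (0.0, 0.5, 0.9, 0.99):
    M1 = np.diag([1.0, np.sqrt(1 - pw)])   # partial collapse towards the ground state
    rho, p1 = measure(rho0, M1)
    rho = damp(rho, g)                      # decoherence acts on the 'sleeping' state
    pu = 1 - (1 - pw) * (1 - g)             # reversal strength for exact uncollapse
    M2 = np.diag([np.sqrt(1 - pu), 1.0])
    rho, p2 = measure(rho, M2)
    fid = np.trace(rho @ rho0)              # overlap with the initial pure state
    print(f"pw = {pw:4.2f}   success probability = {p1 * p2:5.3f}   fidelity = {fid:.4f}")
```

As pw approaches one, the fidelity tends to unity while the success probability vanishes: better preservation comes at the price of a riskier procedure, exactly the Sleeping Beauty trade-off described above.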
An elegant feature of the experiment is the demonstration of entanglement preservation even in the regime where decoherence would otherwise fully destroy it: so-called entanglement sudden death. On the other hand, the preservation of an arbitrary two-qubit state still has to be demonstrated using quantum process tomography. It should also be noted that amplitude damping is not the natural decoherence process for the photon-polarization qubits used in the experiment, and the authors had to use a clever trick to imitate this type of decoherence. Therefore it would be interesting to reproduce the experiment with superconducting qubits5,10, for which energy relaxation is often the main decoherence process.

In my opinion, the most important aspect of the work is educational. It demonstrates that the weirdness of weak quantum measurement can be useful. So far, the field of beyond-orthodox quantum measurement is mainly for experts. However, real measurements are usually not instantaneous (as in the orthodox collapse), but continuous in time and therefore 'weak'. It seems very natural that the basic idea of weak quantum measurement should be introduced at the undergraduate level. Moreover, it only requires the common-sense collapse postulate to be extended to the equally common-sense quantum Bayes' rule. Any experiments demonstrating the usefulness of weak measurements (a good example is quantum feedback) help to gradually change the generally accepted quantum-measurement paradigm. The experiment presented by Kim et al. is a step in this direction. ❐

Alexander N. Korotkov is in the Department of Electrical Engineering, University of California, Riverside, California 92521, USA.
e-mail: korotkov@ee.ucr.edu

References
1. Kim, Y.-S., Lee, J.-C., Kwon, O. & Kim, Y.-H. Nature Phys. 8, 117–120 (2012).
2. Korotkov, A. N. & Keane, K. Phys. Rev. A 81, 040103 (2010).
3. Katz, N. et al. Science 312, 1498–1500 (2006).
4. Korotkov, A. N. & Jordan, A. N. Phys. Rev. Lett. 97, 166805 (2006).
5. Katz, N. et al. Phys. Rev. Lett. 101, 200401 (2008).
6. Merali, Z. Nature 454, 8–9 (2008).
7. Lee, J.-C., Jeong, Y.-C., Kim, Y.-S. & Kim, Y.-H. Opt. Express 19, 16309–16316 (2011).
8. Perrault, C. Contes de ma Mère l'Oye (Claude Barbin, Paris, 1697).
9. Knill, E., Laflamme, R. & Milburn, G. J. Nature 409, 46–52 (2001).
10. Schuster, D. I. et al. Nature 445, 515–518 (2007).

THERMODYNAMICS

A Stirling effort

The realization of a single-particle Stirling engine pushes thermodynamics into stochastic territory where fluctuations dominate, and points towards a better understanding of energy transduction at the microscale.

Jordan M. Horowitz and Juan M. R. Parrondo

It's a familiar textbook example: a heat engine converts thermal energy into mechanical work, usually via a piston that compresses a macroscopic gas, for which temperature, volume and pressure are all well defined. But what happens when we scale an engine down to the point at which some thermodynamic quantities evade a precise definition — and others start to fluctuate? The answer, it seems, may be in reach.

Writing in Nature Physics, Valentin Blickle and Clemens Bechinger report that they have succeeded in realizing a Stirling engine on the microscale1. The standard Stirling cycle involves the isothermal expansion of a gas at high temperature, followed by a decrease in temperature at constant volume (isochoric cooling), an isothermal compression at low temperature and, finally, isochoric heating, which drives the gas back to its initial state.

In Blickle and Bechinger's microscopic engine, a single Brownian particle constitutes the working gas (Fig. 1), a possibility previously suggested in the context of the Carnot cycle2. The particle is confined in a quadratic potential by optical tweezers, and, by tuning the stiffness of the trap, the authors are able to manipulate the area over which the particle roams, effectively mimicking compression and expansion. The water surrounding the trapped particle is heated by means of a laser beam with a wavelength that matches an absorption peak of water, providing exquisite control over rapid temperature changes from 22 °C to 90 °C in less than 10 ms. In this way, Blickle and Bechinger are able to implement every step of the Stirling cycle. More importantly, the work involved in these steps can be measured with a precision of the order of the thermal energy, kBT ≈ 10⁻²¹ J (where kB is the Boltzmann constant and T = 295–363 K is the absolute temperature range of the experiment). With this degree of precision and control, this prototypical microscopic thermal engine opens a door to the design of new machines at the microscale.
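In the quasistatic limit, the mean work along each branch of such a cycle follows from equilibrium statistical mechanics alone: changing the trap stiffness from ki to kf at temperature T costs ⟨W⟩ = (kBT/2) ln(kf/ki), whereas the isochoric branches (stiffness fixed, temperature jumped) involve no work at all. The sketch below is a back-of-envelope estimate under this quasistatic assumption; the temperatures match the range quoted above, but the stiffness values are typical optical-tweezer numbers chosen for illustration, not the parameters measured in the experiment1.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def isothermal_work(T, k_i, k_f):
    """Quasistatic mean work for a stiffness change k_i -> k_f at temperature T:
    <W> = (kB*T/2) * ln(k_f/k_i) for an overdamped particle in a harmonic trap."""
    return 0.5 * kB * T * np.log(k_f / k_i)

T_hot, T_cold = 363.0, 295.0    # K, the experimental temperature range
k_min, k_max = 0.5e-6, 2.0e-6   # N/m, illustrative trap stiffnesses

W_exp = isothermal_work(T_hot, k_max, k_min)    # hot expansion: work extracted (< 0)
W_comp = isothermal_work(T_cold, k_min, k_max)  # cold compression: work spent (> 0)
W_net = W_exp + W_comp                          # isochoric branches do no work

print(f"net work per cycle: {W_net:.2e} J = {W_net / (kB * T_cold):.2f} kB*T_cold")
```

The estimate gives a net output of only a fraction of kBT per cycle, which makes clear why resolving work at the 10⁻²¹ J level is essential; the real engine, moreover, runs at finite speed, where fluctuations and dissipation enter the balance.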
At that scale, thermal fluctuations are unavoidable. During a simple process, such as changing the stiffness of an optical trap, both work and dissipated heat become fluctuating quantities. The appropriate theoretical framework for dealing with these fluctuations is stochastic thermodynamics, which has been developed in the past decade within the context of non-equilibrium statistical mechanics3,4. This powerful formalism provides the tools necessary to calculate work and heat along single, fluctuating, microscopic trajectories, and is capable of explaining some of the remarkable properties of their fluctuations, including the Jarzynski equality and the Crooks relation5, both of which remain exact even for systems arbitrarily far from equilibrium (a numerical check is sketched below).

Stochastic thermodynamics has also been used in the analysis of a number of experiments involving the manipulation of microscopic objects (such as Brownian particles or macromolecules), as well as in investigations of motor proteins operating within living cells. However, Blickle and Bechinger's experiment is the first to incorporate thermal baths at different temperatures — the signature of a thermal engine. As with any typical thermal engine, this microengine extracts energy from the hot thermal bath, converts part of this energy into mechanical work, and dissipates the rest as heat into the colder bath. At
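As a closing illustration of this formalism, the Jarzynski equality, ⟨exp(−W/kBT)⟩ = exp(−ΔF/kBT), can be checked numerically for exactly the kind of process discussed above: a finite-speed stiffness ramp applied to an overdamped Brownian particle in a harmonic trap. The sketch below is illustrative rather than a reproduction of any published simulation; it works in units where kBT = 1, all parameter values are assumptions, and the exact free-energy change for the trap is ΔF = (kBT/2) ln(kf/ki).

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0                  # thermal energy kB*T sets the unit of work
gamma = 1.0               # friction coefficient
dt, n_steps = 1e-3, 2000  # ramp lasts a few trap relaxation times: irreversible
k_i, k_f = 1.0, 4.0       # initial and final trap stiffness
n_traj = 20000            # number of stochastic trajectories

ks = np.linspace(k_i, k_f, n_steps + 1)              # linear stiffness protocol

x = rng.normal(0.0, np.sqrt(kT / k_i), size=n_traj)  # equilibrium initial sample
W = np.zeros(n_traj)
for j in range(n_steps):
    W += 0.5 * (ks[j + 1] - ks[j]) * x**2            # work increment dW = (x^2/2) dk
    x += (-(ks[j + 1] / gamma) * x * dt
          + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=n_traj))  # Langevin step

dF = 0.5 * kT * np.log(k_f / k_i)                    # exact free-energy change
jarz = -kT * np.log(np.mean(np.exp(-W / kT)))        # Jarzynski estimator of dF
print(f"<W> = {W.mean():.3f}   Jarzynski estimate = {jarz:.3f}   exact dF = {dF:.3f}")
```

The mean work exceeds ΔF, reflecting dissipation at finite ramp speed, yet the exponential average recovers ΔF, as the equality guarantees for systems arbitrarily far from equilibrium.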