Diamonds Shine a Light on Hidden Currents in Graphene

It sounds like pure sorcery: using diamonds to observe invisible power swirling and flowing through carefully crafted channels. But these diamonds are a reality. Prof. Ron Walsworth of the Joint Quantum Institute (JQI) and Quantum Technology Center (QTC), working with Postdoctoral Associate Mark Ku, Harvard's Amir Yacoby and Tony Zhou, and colleagues from several other institutions, have developed a way to use diamonds to see the elusive details of electrical currents.

The new technique gives researchers a map of the intricate movement of electricity in the microscopic world. The team demonstrated the potential of the technique by revealing the unusual electrical currents that flow in graphene, a layer of carbon just one atom thick. Graphene has exceptional electrical properties, and the technique could help researchers better understand graphene and other materials and find new uses for them.

In a paper published on July 22 in the journal Nature, the team describes how their diamond-based quantum sensors produce images of currents in graphene. Their results revealed, for the first time, details about how room-temperature graphene can produce electrical currents that flow more like water through pipes than electricity through ordinary wires.

A picture of an electrical current in graphene (marked by the red outline) showing a fluid-like flow imaged using a diamond-based quantum sensor. The grey portion is where the metal electrical contacts prevented collection of data. (Credit: Walsworth and Yacoby research groups, Harvard and University of Maryland)

“Understanding strongly interacting quantum systems, like the currents in our graphene experiment, is a central topic in condensed matter physics,” says Ku, the lead author of the paper. “In particular, collective behaviors of electrons resembling those of fluids with friction might provide a key to explaining some of the puzzling properties of high-temperature superconductors.”

It is no easy task to get a glimpse of current inside a material. After all, a wire alive with electricity looks identical to a dead wire. However, there is an invisible difference between a current-bearing wire and one carrying no electrical power: A moving charge always generates a magnetic field. But if you want to see the fine details of the current you need a correspondingly close look at the magnetic field, which is a challenge. If you apply too blunt a tool, like a magnetic compass, all the detail is washed away and you just measure the average behavior.

Walsworth, who is also the Director of the University of Maryland Quantum Technology Center, specializes in ultra-precise measurements of magnetic fields. His success lies in wielding diamonds, or more specifically quantum imperfections in man-made diamonds.

The Rough in the Diamond

“Diamonds are literally carbon molecules lined up in the most boring way,” said Michael, the immortal being in the NBC sitcom “The Good Place.” But the orderly alignment of carbon molecules isn’t always so boring and perfect.

Imperfections can make their home in diamonds and be stabilized by the surrounding, orderly structure. Walsworth and his team focus on imperfections called nitrogen vacancies, which trade two of the neighboring carbon atoms for a nitrogen atom and a vacancy.

“The nitrogen vacancy acts like an atom or an ion frozen into a lattice,” says Walsworth. “And the diamond doesn't have much of an effect besides conveniently holding it in place. A nitrogen vacancy in a diamond, much like an atom in free space, has quantum mechanical properties, like energy levels and spin, and it absorbs and emits light as individual photons.”

The nitrogen vacancies absorb green light, and then emit it as lower-energy red light; this phenomenon is similar to the fluorescence of the atoms in traffic cones that create the extra-bright orange color. The intensity of the red light that is emitted depends on how the nitrogen vacancy holds energy, which is sensitive to the surrounding magnetic field.

So if researchers place a nitrogen vacancy near a magnetic source and shine green light on the diamond they can determine the magnetic field by analyzing the produced light. Since the relationship between currents and magnetic fields is well understood, the information they collect helps paint a detailed image of the current.
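The "well understood" relationship here is the Biot–Savart law, which gives the magnetic field produced by any current distribution. As an illustrative sketch (not the team's actual reconstruction code), the snippet below numerically integrates the Biot–Savart law for a long straight wire and recovers the textbook result B = μ0·I/(2π·d) for the field at a distance d from the wire:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def wire_field(current, distance, half_length=1000.0, dz=0.05):
    """Field magnitude a perpendicular `distance` from a straight wire of
    length 2*half_length, summing Biot-Savart contributions from short
    segments (midpoint rule)."""
    z = np.arange(-half_length, half_length, dz) + dz / 2
    # dB = (mu0 I / 4 pi) * dz * d / (z^2 + d^2)^(3/2)
    integrand = distance / (z**2 + distance**2) ** 1.5
    return MU0 * current / (4 * np.pi) * np.sum(integrand) * dz

I, d = 1.0, 1.0                         # 1 A of current, evaluated 1 m away
numerical = wire_field(I, d)
analytic = MU0 * I / (2 * np.pi * d)    # infinite-wire textbook result
print(numerical, analytic)              # the two agree closely
```

Inverting this relationship, going from a measured field map back to the current that produced it, is the step the nitrogen-vacancy measurements make possible.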

To get a look at the currents in graphene, the researchers used nitrogen vacancies in two ways.

The first method provides the most detailed view. Researchers run a tiny diamond containing a single nitrogen vacancy straight across a conducting channel. This process measures the magnetic field along a narrow line across a current and reveals changes in the current over distances of about 50 nanometers (the graphene channels they investigated were about 1,000 to 1,500 nanometers wide). But the method is time-consuming, and it is challenging to keep the measurements aligned to form a complete image.

Their second approach produces a complete two-dimensional snapshot, like that shown in the image above, of a current at a particular instant. The graphene rests entirely on a diamond sheet that contains many nitrogen vacancies. This complementary method generates a fuzzier picture but allows them to see the entire current at once.

Not Your Ordinary Current

The researchers used these tools to investigate the flow of currents in graphene in a situation with particularly rich physics. Under the right conditions, graphene can have a current that is made not just out of electrons but out of an equal number of positively charged cousins—commonly called holes because they represent a missing electron.

In graphene, the two types of charges strongly interact and form what is known as a Dirac fluid. Researchers believe that understanding the effects of interactions on the behaviors of the Dirac fluid might reveal secrets of other materials with strong interactions, like high-temperature superconductors. In particular, Walsworth and colleagues wanted to determine if the current in the Dirac fluid flows more like water or honey, or instead like an ordinary electrical current in copper.

In a fluid, the individual particles interact a lot—pushing and pulling on each other. These interactions are responsible for the formation of whirling vortices and the drag on things moving through a fluid. A fluid with these sorts of interactions is called viscous. Thicker fluids like honey or syrup that really drag on themselves are more viscous than thinner fluids like water.

But even water is viscous enough to flow unevenly in smooth pipes. The water flows slowest near the walls of the pipe and fastest at its center. This specific type of uneven flow is called viscous Poiseuille flow, named after Jean Léonard Marie Poiseuille, whose study of blood travelling through tiny blood vessels in frogs inspired him to investigate how fluids flow through small tubes.
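In a channel of width w, Poiseuille flow has the familiar parabolic profile v(y) = v_max(1 − (2y/w)²), zero at the walls and maximal at the center, while an ordinary ohmic current is flat across the channel. A minimal sketch contrasting the two (illustrative profiles in arbitrary units, not the experiment's analysis code):

```python
import numpy as np

def poiseuille(y, width, v_max=1.0):
    """Parabolic profile: zero at the walls, maximal at the center."""
    return v_max * (1.0 - (2.0 * y / width) ** 2)

def ohmic(y, v=1.0):
    """Uniform profile of an ordinary conductor: flat up to the edges."""
    return np.full_like(y, v)

w = 1.0
y = np.linspace(-w / 2, w / 2, 101)   # positions across the channel
vp, vo = poiseuille(y, w), ohmic(y)
print(vp[0], vp[50], vo[0])           # walls: 0.0, center: 1.0; ohmic: 1.0
```

A measured current-density profile that bows down toward the channel edges, like `vp` here, is the fluid-flow signature the imaging technique looks for.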

In contrast, the electrons in a normal conductor, like the wires in computers and walls, don’t interact much. They are much more influenced by the environment within the conducting material—often impurities in the material in particular. On the individual scale, their motion is more like that of perfume wafting through the air than water rushing down a pipe. Each electron mostly does its own thing, bouncing from one impurity to the next like a perfume molecule bouncing between air molecules. So electrical currents tend to spread out and flow evenly, all the way up to the edges of the conductor.

But in certain materials, like graphene, researchers realized that electrical currents can behave more like fluids. It requires just the right conditions of strong interactions and few impurities to see the electrical equivalents of Poiseuille flow, vortices and other fluid behaviors.

“Not many materials are in this sweet spot,” says Ku. “Graphene turns out to be such a material. When you take most other conductors to very low temperature to reduce the electron’s interactions with impurities, either superconductivity kicks in or the interactions between electrons just aren’t strong enough.”

Mapping Graphene’s Currents

While previous research indicated that electrons can flow viscously in graphene, those experiments did not probe a Dirac fluid, where the interactions between electrons and holes must be considered. Previously, researchers couldn’t get an image of a Dirac fluid current to confirm details such as whether it exhibited Poiseuille flow. But the two new methods introduced by Walsworth, Ku and their colleagues produce images that revealed that the Dirac fluid current decreases toward the edges of the graphene, like it does for water in a pipe. They also observed the viscous behavior at room temperature; evidence from previous experiments for viscous electrical flow in graphene was restricted to colder temperatures.

The team believes this technique will find many uses, and Ku is interested in continuing this line of research and trying to observe new viscous behaviors using these techniques in his next position as an assistant professor of physics at the University of Delaware. In addition to providing insight into Dirac fluid physics relevant to other strongly interacting systems, such as high-temperature superconductors, the technique may also reveal exotic currents in other materials and provide new insights into phenomena like the quantum spin Hall effect and topological superconductivity. And as researchers better understand new electronic behaviors of materials, they may be able to develop other practical applications as well, like new types of microelectronics.

“We know there are lots of technological applications for things that carry electrical currents,” says Walsworth. “And when you find a new physical phenomenon, eventually, people will probably figure out some way to use it technologically. We want to think about that for the viscous current in graphene in the future.”

Original story by Bailey Bedford 

In addition to Walsworth, Ku, Yacoby and Zhou, Qing Li, a physics graduate student at the Massachusetts Institute of Technology; Young J. Shin, a scientist at Brookhaven National Lab; Jing K. Shi, a scientist at the Institute for Infocomm Research; Claire Burch, a former research intern; Laurel E. Anderson, a physics graduate student at Harvard; Andrew T. Pierce, a physics graduate student at Harvard; Yonglong Xie, a joint postdoctoral fellow at Harvard and the Massachusetts Institute of Technology; Assaf Hamo, a postdoctoral fellow at Harvard; Uri Vool, a postdoctoral fellow at Harvard; Huiliang Zhang, a staff engineer at PDF Solutions; Francesco Casola, a quantitative research associate at Capital Fund Management; Takashi Taniguchi, a researcher at the National Institute for Materials Science in Japan; Kenji Watanabe, a researcher at the National Institute for Materials Science in Japan; Michael M. Fogler, a professor of physics at UC San Diego; and Philip Kim, a professor of physics at Harvard, were also co-authors of the paper.

Research Contacts
Ronald Walsworth
Mark Ku
Media Contact
Bailey Bedford

New Quantum Information Speed Limits Depend on the Task at Hand

Unlike speed limits on the highway, most speed limits in physics cannot be disobeyed. For example, no matter how little you care about getting a ticket, you can never go faster than the speed of light. Similarly stringent limits exist for information, too. The speed of light is still the ultimate speed limit, but depending on how information is stored and transmitted, there can be slower limits in practice.

The story gets particularly subtle when the information is quantum. Quantum information is represented by qubits (the quantum version of ordinary bits), which can be stored in photons, atoms or any number of other systems governed by the rules of quantum physics. Figuring out how fast information can move from one qubit to another is not only interesting from a fundamental point of view; it’s also important for more practical purposes, like improving the designs of quantum computers and learning what their limitations might be.

Now, a group of UMD researchers led by Adjunct Associate Professor Alexey Gorshkov—who is a Fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science and a physicist at the National Institute of Standards and Technology—in collaboration with teams at the University of Colorado Boulder, Caltech, and the Colorado School of Mines, has found something surprising: the speed limit for quantum information can depend on the task at hand. They detail their results in a paper published July 13, 2020 in the journal Physical Review X and featured in Physics.

A new protocol for cutting and pasting quantum information first spreads the content of one quantum bit (blue dot) over a region (black circle). Then, it takes advantage of long-range interactions (blue streaks) to transfer the information. Finally, it collects the information at the target quantum bit (red dot). (Credit: Chi-Fang Chen/Caltech)

Just as knowing the speed of light doesn’t automatically let us build rockets that can travel that fast, knowledge of the speed at which quantum information can travel doesn’t tell us how it can be done. But figuring out what sets these speed limits did allow the team to come up with new information transfer methods that approach the theoretical speed limit closer than ever before.

“Figuring out the fastest way to move quantum information around will help us maximize the performance of future quantum computers,” says Minh Tran, a graduate student in physics at UMD and the lead author of the paper.

One procedure subject to these new limits is like a quantum cut and paste: moving the information stored in one qubit to a different one far away. It’s a crucial task that can become a bottleneck as quantum computers get larger and larger. In quantum computers based on superconductors, like Google’s Sycamore, qubits only really talk to their next-door neighbors. Or, in physics-speak, their interactions are short-range. That means that once you cut the information from a qubit, you’d have to go door to door, cutting and pasting it until you reach the target. The speed limit for this situation was found back in the 1970s. It’s strict and consistent—it doesn’t ease up no matter how far the information travels.

Things get more complicated—and more realistic for a lot of quantum computing platforms—when you start to consider long-range interactions: qubits that talk not only to the ones directly next to them, but also to neighbors several doors down. Quantum computers built with trapped ions, polar molecules, and Rydberg atoms all have these long-range interactions.

Previous work has shown that in long-range interacting setups, there isn’t always a strict speed limit. Sometimes, the information can travel faster once it’s gone further away from its starting point, and other times its speed isn’t limited at all (except for the ultimate limit set by the speed of light). This depends on the dimensions of the quantum computer (if it’s a chain, a pancake, or a cube) as well as the strength of the long-range interaction (how loudly one qubit can talk to another many doors down).

Finding regimes where these long-range interactions relax the information speed limits carries the promise of making quantum processing much faster. Gorshkov, Tran and their collaborators looked more closely at the regime where the speed limit is not strict—where information is allowed to travel faster as it gets further away from its origin. What they found was surprising: for some applications, the speed limit was indeed loose as previously discovered. But for others, the speed limit was just as strict as in the nearest neighbor case.

This implies that for the same quantum computer the speed limits are different for different tasks. And even for the same task, such as quantum cut-and-paste, different rules can apply in different situations. If you cut-and-paste in the beginning of a computation, the speed limit is loose, and you can do it very quickly. But if you have to do it mid-computation, when the states of the qubits along the way aren’t known, a stricter speed limit applies.

“The existence of different speed limits is cool fundamentally because it shows a separation between tasks that seemed very similar,” says Abhinav Deshpande, a graduate student in physics at UMD and one of the authors of the new paper. 

So far, few experimental realizations of quantum computers have been able to take advantage of long-range interactions. Nevertheless, the state of the art is improving rapidly, and these theoretical findings may soon play a crucial role in designing quantum computing architectures and choosing protocols that optimize their efficiency. “Once you get systems that are larger and more coherent,” says Gorshkov, “down the road, these insights will be even more applicable.”

Original story by Dina Genkina

In addition to Tran, Deshpande and Gorshkov, authors on this paper included Chi-Fang Chen, a graduate student in physics at the California Institute of Technology; Adam Ehrenberg, a graduate student in physics at UMD; Andrew Guo, a graduate student in physics at UMD; Yifan Hong, a graduate student in physics at the University of Colorado Boulder; Zhexuan Gong, a former research scientist at JQI who is currently an assistant professor of physics at the Colorado School of Mines; and Andrew Lucas, an assistant professor of physics at the University of Colorado Boulder.

Research Contacts
Minh Tran
Alexey Gorshkov

Quantum Gases Won’t Take the Heat

The quantum world blatantly defies intuitions that we’ve developed while living among relatively large things, like cars, pennies and dust motes. In the quantum world, tiny particles can maintain a special connection over any distance, pass through barriers and simultaneously travel down multiple paths.

A less widely known quantum behavior is dynamical localization, a phenomenon in which a quantum object stays at the same temperature despite a steady supply of energy—bucking the assumption that a cold object will always steal heat from a warmer object.

This assumption is one of the cornerstones of thermodynamics—the study of how heat moves around. The fact that dynamical localization defies this principle means that something unusual is happening in the quantum world—and that dynamical localization may be an excellent probe of where the quantum domain ends and traditional physics begins. Understanding how quantum systems maintain, or fail to maintain, quantum behavior is essential not only to our understanding of the universe but also to the practical development of quantum technologies.

“At some point, the quantum description of the world has to change over to the classical description that we see, and it's believed that the way this happens is through interactions,” says UMD postdoctoral researcher Colin Rylands of the Joint Quantum Institute.

Equipment at the University of California, Santa Barbara for creating and manipulating quantum gases. It is being used to investigate the dynamical localization of interacting atoms, which is related to new work by JQI researchers. (Credit: Tony Mastres, UCSB)

Until now, dynamical localization has only been observed for single quantum objects, which has prevented it from contributing to attempts to pin down where the changeover occurs. To explore this issue, Rylands, together with Prof. Victor Galitski and other colleagues, investigated mathematical models to see if dynamical localization can still arise when many quantum particles interact. To reveal the physics, they had to craft models to account for various temperatures, interaction strengths and lengths of times. The team’s results, published in Physical Review Letters, suggest that dynamical localization can occur even when strong interactions are part of the picture.

“This result is an example of where a single quantum particle behaves completely differently from a classical particle, and then even with the addition of strong interactions the behavior still resembles that of the quantum particle rather than the classical,” says Rylands, who is the first author of the article.

A Quantum Merry-Go-Round

The result extends dynamical localization beyond its single-particle origins, into the regime of many interacting particles. But in order to visualize the effect, it’s still useful to start with a single particle. Often, that single particle is discussed in terms of a rotor, which you can picture as a playground merry-go-round (or anything else that spins in a circle). The energy of a rotor (and its temperature) is directly related to how fast it is spinning. And a rotor with a steady supply of energy—one that is given a regular “kick”—is a convenient way of visualizing the differences in the flow of energy in quantum and classical physics.

For example, imagine Hercules tirelessly swiping at a merry-go-round. Most of his swipes will speed it up, but occasionally a swipe will land poorly and slow it down. Under these (imaginary) conditions, a normal merry-go-round would spin faster and faster, building up more and more energy until vibrations finally shake the whole thing apart. This represents how a normal rotor, in theory, can heat up forever without hitting an energy limit.

In the quantum world, things go down differently. For a quantum merry-go-round each swipe doesn’t simply increase or decrease the speed. Instead, each swipe produces a quantum superposition over different speeds, representing the chance of finding the rotor spinning at different rates. It’s not until you make a measurement that a particular speed emerges from the quantum superposition caused by the preceding kicks.

Previous research, both theoretical and experimental, has shown that at first a quantum rotor doesn’t behave very differently from a normal rotor because of this distinction—on average a quantum merry-go-round will also have more energy after experiencing more kicks. But once a quantum rotor has been kicked enough, its speed tends to plateau. After a certain point, the persistent effort of our quantum Hercules fails to increase the quantum merry-go-round’s energy (on average).

This behavior is conceptually similar to another thermodynamics-defying quantum phenomenon called Anderson localization. Philip Anderson, one of the founders of condensed-matter physics, earned a Nobel Prize for the discovery of the phenomenon. He and his colleagues explained how a quantum particle, like an electron, could become trapped despite many apparent opportunities to move. They explained that imperfections in the arrangement of atoms in a solid can lead to quantum interference among the paths available to a quantum particle, changing the likelihood of it taking each path. In Anderson localization, the chance of being on any path becomes almost zero, leaving the particle trapped in place.

Dynamical localization looks a lot like Anderson localization but instead of getting trapped at a particular position, a particle’s energy gets stuck. As a quantum object, a rotor’s energy and thus speed are restricted to a set of quantized values. These values form an abstract grid or lattice similar to the locations of atoms in a solid and can produce an interference among energy states similar to the interference among paths in physical space. The probabilities of the different possible energies, instead of the possible paths of a particle, interfere, and the energy and speed get stuck near a single value, despite ongoing kicks.
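The energy plateau of a kicked quantum rotor can be reproduced in a short simulation. The sketch below uses the standard textbook kicked-rotor model with arbitrarily chosen kick strength and effective Planck constant (these parameters and the grid size are illustrative assumptions, not values from the paper). It evolves the quantum rotor by alternating free rotation and instantaneous kicks, then compares its energy to a classical ensemble driven by the same kicks:

```python
import numpy as np

K, HBAR, N_KICKS = 5.0, 2.0, 300   # kick strength, effective Planck constant

# --- quantum kicked rotor: split-step evolution over N angular-momentum states
N = 512
m = np.fft.fftfreq(N, d=1.0 / N)               # integer momentum quantum numbers
theta = 2 * np.pi * np.arange(N) / N           # angle grid
psi = np.ones(N, dtype=complex) / np.sqrt(N)   # start in the m = 0 state
free = np.exp(-1j * HBAR * m**2 / 2)           # free rotation for one period
kick = np.exp(-1j * (K / HBAR) * np.cos(theta))  # one instantaneous kick
for _ in range(N_KICKS):
    psi = np.fft.ifft(free * np.fft.fft(psi))  # free flight (momentum basis)
    psi *= kick                                # kick (angle basis)
prob = np.abs(np.fft.fft(psi)) ** 2
prob /= prob.sum()
E_quantum = np.sum(prob * (HBAR * m) ** 2 / 2)

# --- classical ensemble: the standard map keeps absorbing energy diffusively
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 5000)            # random initial angles
p = np.zeros(5000)                             # all rotors start at rest
for _ in range(N_KICKS):
    p += K * np.sin(x)
    x = (x + p) % (2 * np.pi)
E_classical = np.mean(p**2) / 2

print(E_quantum, E_classical)   # quantum energy plateaus far below classical
```

With these parameters the classical ensemble keeps heating, while the quantum rotor's energy saturates after a few dozen kicks, the dynamical localization described above.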

Exploring a New Quantum Playground

While Anderson localization provided researchers with a perspective to understand a single kicked quantum rotor, it left some ambiguity about what happens to many interacting rotors that can toss energy back and forth. A common expectation was that the extra interactions would allow normal heating by disrupting the quantum balance that limits the increase of energy.

Galitski and colleagues identified a one-dimensional system where they thought the expectation may not hold true. They chose an interacting one-dimensional Bose gas as their playground. In a Bose gas, particles zipping back and forth down a line play the part of the rotors spinning in place. The gas atoms follow the same basic principles as kicked rotors but are more practical to work with in a lab. In labs, lasers can be used to contain the gas and also to cool the atoms in the gas down to a low temperature, which is essential to ensuring a strong quantum behavior.

Once the team selected this playground, they explored mathematical models of the many interacting gas atoms. Exploring the gas at a variety of temperatures, interaction strengths and number of kicks required the team to switch between several different mathematical techniques to get a full picture. In the end their results combined to suggest that when a gas with strong interactions starts near zero temperature it can experience dynamical localization. The team named this phenomenon “many-body dynamical localization.”

"These results have important implications and fundamentally demonstrate our incomplete understanding of these systems," says Robert Konik, a coauthor of the paper and physicist at Brookhaven National Lab. "They also contain the seed of possible applications because systems that do not accept energy should be less sensitive to quantum decoherence effects and so might be useful for making quantum computers."

Experimental Support

Of course, a theoretical explanation is only half the puzzle; experimental confirmation is essential to knowing if a theory is on solid ground. Fortunately, an experiment on the opposite coast of the U.S. has been pursuing the same topic. Conversations with Galitski inspired David Weld, an associate physics professor at the University of California, Santa Barbara, to use his team’s experimental expertise to probe many-body dynamical localization.

“Usually it's not easy to convince an experimentalist to do an experiment based on theory,” says Galitski. “This case was kind of serendipitous, that David already had almost everything ready to go.”

Weld’s team is using a quantum gas of lithium atoms that is confined by lasers to create an experiment similar to the theoretical model Galitski’s team developed. (The main difference is that in the experiment the atoms move in three dimensions instead of just one.)

In the experiment, Weld and his team kick the atoms hundreds of times using laser pulses and repeatedly observe their fate. For different runs of the experiment, they tune the interaction strength of the atoms to different values.

“It's nice because we can go to a noninteracting regime quite perfectly, and that's something that it's pretty easy to calculate the behavior of,” says Weld. “And then we can continuously turn up the interaction and move into a regime that's more like what Victor and his coworkers are talking about in this latest paper. And we do observe localization, even in the presence of the strongest interactions that we can add to the system. That's been a surprise to me.”

Their preliminary results confirm the prediction that many-body dynamical localization can occur even when strong interactions are part of the picture. This opens new opportunities for researchers to try to pin down the boundary between the quantum and classical world.

“It's nice to be able to show something that people didn't expect and also for it to be experimentally relevant,” says Rylands.

Story by Bailey Bedford

In addition to Rylands, Galitski and Konik, former JQI graduate student Efim Rozenbaum, who is now a consultant with Boston Consulting Group, was also a co-author of the paper.

Research Contact: Colin Rylands
Media Contact: Bailey Bedford

Peeking into a World of Spin-3/2 Materials

Researchers have been pushing the frontiers of the quantum world for over a century. And time after time, spin has been a rich source of new physics.

Spin, like mass and electrical charge, is an intrinsic property of quantum particles. It is central to understanding how quantum objects will respond to a magnetic field, and it divides all quantum objects into two types. The half-integer ones, like the spin-1/2 electron, refuse to share the same quantum state, whereas the integer ones, like the spin-1 photon, don’t have a problem cozying up together. So, spin is essential when delving into virtually any topic governed by quantum mechanics, from the Higgs boson to superconductors.

In a material, the momentum and energy of an electron are tied together by a “dispersion relation” (pictured above). This relationship influences the electrons’ behavior, sometimes making them behave as particles with different quantum properties. (Credit: Igor Boettcher/University of Maryland)

Yet after almost a century of playing a central role in quantum research, questions about spin remain. For example, why do all the elementary particles that we know about only have spin values of 0, 1/2, or 1? And what new behaviors might exist for particles with spin values greater than 1?

The first question may remain a cosmic mystery, but there are opportunities to explore the second. Inside of a material, a particle’s surroundings can cause it to behave like it has a new spin value. In the past couple of years, researchers have discovered materials in which electrons behave like their spin has been bumped up, from 1/2 to 3/2. UMD postdoctoral researcher Igor Boettcher of the Joint Quantum Institute explored the new behaviors these spins might produce in a recent paper featured on the cover of Physical Review Letters.

Instead of looking at a particular material, Boettcher focused on the math that describes interactions between spin-3/2 electrons at low temperatures. These electrons can interact in more ways than their mundane spin-1/2 counterparts, which unlocks new phases—or collective behaviors—that researchers can look for in experiments. Boettcher sifted through the possible phases, searching for the ones that are likely to be stable at low temperatures. He looked at which phases tie up the least energy in the interactions, since as the temperature drops a material becomes most stable in the form containing the least energy (like steam condensing into liquid water and eventually freezing into ice).

He found three promising phases to hunt for in experiments. Which of these phases, if any, arise in a particular material will depend on its unique properties. Still, Boettcher’s predictions provide researchers with signals to keep an eye out for during experiments. If one of the phases forms, he predicts that common measurement techniques will reveal a signature shift in the electrical properties.

Boettcher’s work is an early step in the exploration of spin-3/2 materials. He hopes that one day the field might be comparable to that of graphene, with researchers constantly racing to explore new physics, produce better quality materials, and identify new transport properties.

“I really hope that this will develop into a big field, which will require both experimentalists and the theorists to do their part so that we can really learn something about the spin-3/2 particles and how they interact,” says Boettcher. “This is really just the beginning right now, because these materials just popped up.”

Story by Bailey Bedford

Research Contact: Igor Boettcher
Media Contact: Bailey Bedford

New Protocol Helps Classify Topological Matter

Topological materials have captured the interest of many scientists and may provide the basis for a new era in materials development. On April 10, 2020 in the journal Science Advances, physicists working with Andreas Elben, Jinlong Yu, Peter Zoller and Benoit Vermersch, including Associate Professor Mohammad Hafezi and former Joint Quantum Institute (JQI) postdoctoral researcher Guanyu Zhu (currently a research staff member at IBM T. J. Watson Research Center), presented a new method for identifying and characterizing topological invariants on various experimental platforms, testing their protocol in a quantum simulator made of neutral atoms.

Quantum simulators are an emerging tool for preparing and investigating complex quantum states. They can be realized in a variety of different physical systems—such as ultracold atoms in optical lattices, Rydberg atoms, trapped ions or superconducting quantum bits—and they promise to enhance the study of exotic states of matter.

Topological phases of matter are a particularly fascinating class of quantum states. (Credit: Harald Ritsch/IQOQI Innsbruck)

In particular, this new breed of simulator may be able to prepare topological states of matter, which researchers find particularly fascinating. In 2016, David Thouless, Duncan Haldane and Michael Kosterlitz were awarded the Nobel Prize in Physics for their theoretical discoveries about topological states. Scientists now know that these states of matter are characterized by nonlocal quantum correlations, making them particularly robust against local distortions that inevitably occur in experiments.

But it’s often hard to know if a material sample in the lab is in a topological phase. “Identifying and characterizing such topological phases in experiments is a great challenge,” say Vermersch, Yu and Elben from the Center for Quantum Physics at the University of Innsbruck and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences. “Topological phases cannot be identified by local measurements because of their special properties. We are therefore developing new measurement protocols that will enable experimental physicists to characterize these states in the laboratory.”

In recent years this identification has been achieved for systems without any interactions. However, for interacting systems, which in the future could also be used as topological quantum computers, this has not been possible so far.

In the new work, the research team proposed and experimentally tested protocols that might enable other experimenters to measure topological invariants. These mathematical expressions distinguish different topological phases, making it possible to classify interacting topological states in a wide variety of systems.

"The idea of our method is to first prepare such a topological state in a quantum simulator,” explains Elben. “Now so-called random measurements are performed, and topological invariants are extracted from statistical correlations of these random measurements.”

The key to the method is that although the topological invariants are highly complex, non-local correlation functions, they can still be extracted from statistical correlations of simple and local random measurements. “The many-body invariants characterizing different types of topological orders are path-integrals in topological quantum field theory, corresponding to various types of space-time manifolds, such as the real-projective plane,” says Zhu. “It is kind of a miracle that we eventually realized that these highly abstract quantities in theory can actually be measured in relatively simple experiments.”
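The flavor of this approach can be illustrated with a much simpler randomized-measurement quantity than a topological invariant: the purity Tr(ρ²) of a single qubit, which related earlier work by Elben and colleagues showed can be recovered from second-order statistical correlations of measurements in Haar-random bases. The following Python sketch is a toy illustration of that general idea, not the paper's actual protocol; the function names and the single-qubit setting are assumptions for this example.

```python
import numpy as np

def haar_unitary(d, rng):
    # QR decomposition of a complex Ginibre matrix, with the standard
    # phase fix on the diagonal of R, yields a Haar-random unitary.
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def estimate_purity(rho, n_unitaries, rng):
    # Randomized-measurement estimator of Tr(rho^2) for one qubit:
    # Tr(rho^2) = 2 * sum_{s,s'} (-2)^(-D(s,s')) E_U[P_U(s) P_U(s')],
    # where P_U(s) are Born probabilities after rotating by a random U
    # and D is the Hamming distance between outcomes s, s'.
    estimates = []
    for _ in range(n_unitaries):
        u = haar_unitary(2, rng)
        p = np.real(np.diagonal(u @ rho @ u.conj().T))  # Born probabilities
        estimates.append(2 * (p[0] ** 2 + p[1] ** 2 - p[0] * p[1]))
    return float(np.mean(estimates))

rng = np.random.default_rng(0)
pure = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state, purity 1
mixed = np.eye(2, dtype=complex) / 2              # maximally mixed, purity 1/2
print(estimate_purity(pure, 20000, rng))   # close to 1.0
print(estimate_purity(mixed, 200, rng))    # 0.5
```

The key point the sketch shares with the paper's protocol is that a nonlinear, basis-independent property of the state is reconstructed purely from correlations among outcomes of simple local measurements in random bases; in real experiments the exact Born probabilities above would be replaced by finite-shot frequencies.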

And as some members of the research group have recently shown, such random measurements are possible in experiments today. "Our protocols for measuring the topological invariants can therefore be directly applied in the existing experimental platforms," says Vermersch.

In addition to Elben, Yu, Zoller, Vermersch, Zhu and Hafezi, the co-authors included Frank Pollmann from the Technical University of Munich. The research was financially supported by the European Research Council and the EU flagship for quantum technologies, as well as the Army Research Office MURI program and the NSF Physics Frontier Center at JQI.

This story was originally published by the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences in Innsbruck. It was adapted with permission by the JQI.

Research Contact: Mohammad Hafezi