Quantum Simulation Stars Light in the Role of Sound

Inside a material, such as an insulator, semiconductor or superconductor, a complex drama unfolds that determines its physical properties. Physicists work to observe these scenes and recreate the script that the actors—electrons, atoms and other particles—play out. It is no surprise that electrons are most frequently the stars in the stories behind electrical properties. But there is an important supporting actor that usually doesn’t get a fair share of the limelight.

This underrecognized actor in the electronic theater is sound, or more specifically the quantum mechanical excitations that carry sound and heat. Scientists treat these quantized vibrations as quantum mechanical particles called phonons. Similar to how photons are quantum particles of light, phonons are quantum particles of sound and other vibrations in a solid. Phonons are always pushing and pulling on electrons, atoms or molecules and producing new interactions between them.

The role that phonons play in the drama can be tricky for researchers to suss out. And sometimes when physicists identify an interesting story to study, they can’t easily find a material with all the requisite properties or of sufficient chemical purity.

Cigar-shaped clouds of atoms (pink) are levitated in a chamber where an experiment uses light to recreate behavior that normally is mediated by quantum particles of sound. (Credit: Yudan Guo, Stanford)

To help overcome the challenges of working directly with phonons in physical materials, Professor Victor Galitski, Joint Quantum Institute (JQI) postdoctoral researcher Colin Rylands and their colleagues have cast photons in the role of phonons in a classic story of phonon-driven physics. In a paper published recently in Physical Review Letters, the team proposes an experiment to demonstrate photons’ adequacy as an understudy and describes the setup needed to make the show work.

“The key idea came from an interdisciplinary collaboration that led to the realization that seemingly unrelated electron-phonon systems and neutral atoms coupled to light may share the exact same mathematical description,” says Galitski. “This new approach promises to deliver a treasure trove of exciting phenomena that can be transplanted from material physics to light-matter cavity systems and vice versa."

The Stage

Galitski and colleagues propose using a very carefully designed mirrored chamber—like coauthor Benjamin Lev has in his lab at Stanford University—as the stage where photons can take on the role of phonons. This type of chamber, called an optical cavity, is designed to hold light for a long time by bouncing it between the highly-reflective walls.

“We made cavities where if you stick your head in there—of course it's only a centimeter wide—you would see yourself 50,000 times,” says Lev. “Our mirrors are very highly polished and so the reflections don't rapidly decay away and get lost.”

In an optical cavity, the bouncing light can hold a cloud of atoms in a pattern that mimics the lattice of atoms in a solid. But if a cavity is too simple and can only contain a single light pattern—a mode—the lattice is frozen in place. The light has to be able to take on many modes to simulate the natural distortions of phonons in the material.

To create more dynamic stories with phonons, the team suggests using multimode confocal cavities. “Multimode confocal” basically means the chamber is shaped with unforgiving precision so that it can contain many distinct spatial distributions of light.

“If it were just a normal single-mode cavity—two curved mirrors spaced at some arbitrary distance from one another— it would only be a Gaussian shape that could bounce back and forth and would be kind of boring; your face would be really distorted,” says Lev. “But if you stick your face in our cavities, your face wouldn’t look too different—it looks a little different, but not too different. You can support most of the different shapes of the waveform of your face, and that will bounce back and forth.”

Mirrors with a green tint can be seen inside a small experimental cavity.

View of the cavity mirrors that serve as a stage for quantum simulations where light takes on the role of sound. (Credit: LevLab, Stanford)

The variety of light distributions that these special cavities can harbor, along with the fact that the photons can interact with one atom and then get reflected back to a different atom, allows the researchers to create many different interactions in order to cast the light as phonons.

“It's that profile of light in the cavity which is playing the role of the phonons,” says Jonathan Keeling, a coauthor of the paper and a physicist at the University of St Andrews. “So the equivalent of the lattice distorting in one place is that this light is more intense in this place and less intense in another place.”

The Script

In the paper, the team proposes the first experiment to perform with these multimode confocal cavities—the first play to premiere on the new stage. It’s a classic of condensed matter physics: the Peierls transition. The transition occurs in one-dimensional chains of particles with an attractive force between them. The attractive force leads the particles to pair up so that they form a density wave—two close particles and a space, followed by two close particles and a space, on and on. Even a tiny pull between particles creates an instability that draws them into pairs instead of letting them spread out evenly.
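The textbook way to see why this pairing turns a conductor into an insulator is a dimerized tight-binding chain (the Su–Schrieffer–Heeger picture). The sketch below is an illustration of that standard model, not the model from the paper: hopping amplitudes t1 and t2 alternate along the chain, and a band gap of 2|t1 − t2| opens at the zone edge whenever the bonds are unequal.

```python
import math

def band_gap(t1, t2, nk=2000):
    """Band gap of a dimerized 1D tight-binding chain.

    The two bands disperse as E(k) = +/- sqrt(t1^2 + t2^2 + 2*t1*t2*cos(k)).
    The gap is the minimum band separation, reached at k = pi, where it
    equals 2*|t1 - t2|.
    """
    gap = float("inf")
    for i in range(nk + 1):
        k = 2 * math.pi * i / nk  # grid includes k = pi when nk is even
        e = math.sqrt(t1 ** 2 + t2 ** 2 + 2 * t1 * t2 * math.cos(k))
        gap = min(gap, 2 * e)     # separation between +E(k) and -E(k)
    return gap

# Uniform chain (no pairing): the bands touch, so the chain conducts.
print(band_gap(1.0, 1.0))  # ~0.0
# Dimerized chain: a gap of 2*|t1 - t2| = 0.4 opens -> insulator.
print(band_gap(1.0, 0.8))  # ~0.4
```

Even an arbitrarily small difference between t1 and t2 produces a nonzero gap, which is the instability the Peierls transition exploits.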

In certain materials, the attractive pull from phonons is known to trigger a dramatic electrical effect through the Peierls transition. The creation of the density wave makes it harder for electrons to move through a material—resulting in a sudden transition from conductor to insulator.

“The Peierls transition is mathematically very similar to, but less well known than, superconductivity,” says Rylands. “And, like in superconducting systems, or many other systems that you would study in a solid-state lab, all these different phases of matter are driven by the interactions between phonons and the electrons.”

To recast the phonons as light, the team also has to recast the electrons in the 1D material as cigar-shaped clouds of atoms levitated in the chamber, as shown in the image above. But in this case, there is no sudden cutoff of an electrical current to conveniently signal that the transition has occurred, as there is in traditional experiments with solids. Instead, the team predicted what the light exiting the cavity should look like if the transition occurs.

Opening Night

The authors say that the proposed experiment will debut in cavities in Lev’s lab.

“It’s kind of nice when you have an experimentalist on a theory paper—you can hold the theorists’ feet to the fire and say, ‘Well, you know, that sounds great, but you can't actually do that,’” says Lev. “And here we made sure that you can do everything in there. So really, it's a quite literal roadmap to what we're going to do in the near future.”

If this experiment lines up with their predictions, it will give them confidence in the system’s ability to reveal new physics through other simulations.

There is competition for which shows will get the chance to take center stage, since there are many different scripts of physical situations that researchers can use the cavities to explore. The technique has promise for creating new phases of matter, investigating dynamic quantum mechanical situations, and understanding materials better. The cavities as stages for quantum simulations put researchers in the director’s chair for many quantum shows.

Original story by Bailey Bedford: https://jqi.umd.edu/news/quantum-simulation-stars-light-role-sound 

A note from the researchers: The JQI and Stanford collaborators are especially grateful for support from the US Army Research Office, and discussions with Dr. Paul Baker at US-ARO, that made this work possible.

In addition to Galitski, Rylands, Lev and Keeling, Stanford graduate student Yudan Guo was also a co-author of the paper.

Research Contact
Colin Rylands

Media Contact
Bailey Bedford

Diamonds Shine a Light on Hidden Currents in Graphene

It sounds like pure sorcery: using diamonds to observe invisible power swirling and flowing through carefully crafted channels. But these diamonds are a reality. Prof. Ron Walsworth of the Joint Quantum Institute (JQI) and Quantum Technology Center (QTC), working with Postdoctoral Associate Mark Ku, Harvard's Amir Yacoby and Tony Zhou, and colleagues from several other institutions, have developed a way to use diamonds to see the elusive details of electrical currents.

The new technique gives researchers a map of the intricate movement of electricity in the microscopic world. The team demonstrated the potential of the technique by revealing the unusual electrical currents that flow in graphene, a layer of carbon just one atom thick. Graphene has exceptional electrical properties, and the technique could help researchers better understand graphene and other materials and find new uses for them.

In a paper published on July 22 in the journal Nature, the team describes how their diamond-based quantum sensors produce images of currents in graphene. Their results revealed, for the first time, details about how room-temperature graphene can produce electrical currents that flow more like water through pipes than electricity through ordinary wires.

A picture of an electrical current in graphene (marked by the red outline) showing a fluid-like flow imaged using a diamond-based quantum sensor. The grey portion is where the metal electrical contacts prevented collection of data. (Credit: Walsworth and Yacoby research groups, Harvard and University of Maryland)

“Understanding strongly interacting quantum systems, like the currents in our graphene experiment, is a central topic in condensed matter physics,” says Ku, the lead author of the paper. “In particular, collective behaviors of electrons resembling those of fluids with friction might provide a key to explaining some of the puzzling properties of high-temperature superconductors.”

It is no easy task to get a glimpse of current inside a material. After all, a wire alive with electricity looks identical to a dead wire. However, there is an invisible difference between a current-bearing wire and one carrying no electrical power: A moving charge always generates a magnetic field. But if you want to see the fine details of the current you need a correspondingly close look at the magnetic field, which is a challenge. If you apply too blunt a tool, like a magnetic compass, all the detail is washed away and you just measure the average behavior.

Walsworth, who is also the Director of the University of Maryland Quantum Technology Center, specializes in ultra-precise measurements of magnetic fields. His success lies in wielding diamonds, or more specifically quantum imperfections in man-made diamonds.

The Rough in the Diamond

“Diamonds are literally carbon molecules lined up in the most boring way,” said Michael, the immortal being in the NBC sitcom “The Good Place.” But the orderly alignment of carbon molecules isn’t always so boring and perfect.

Imperfections can make their home in diamonds and be stabilized by the surrounding, orderly structure. Walsworth and his team focus on imperfections called nitrogen vacancies, which trade two of the neighboring carbon atoms for a nitrogen atom and a vacancy.

“The nitrogen vacancy acts like an atom or an ion frozen into a lattice,” says Walsworth. “And the diamond doesn't have much of an effect besides conveniently holding it in place. A nitrogen vacancy in a diamond, much like an atom in free space, has quantum mechanical properties, like energy levels and spin, and it absorbs and emits light as individual photons.”

The nitrogen vacancies absorb green light, and then emit it as lower-energy red light; this phenomenon is similar to the fluorescence of the atoms in traffic cones that create the extra-bright orange color. The intensity of the red light that is emitted depends on how the nitrogen vacancy holds energy, which is sensitive to the surrounding magnetic field.

So if researchers place a nitrogen vacancy near a magnetic source and shine green light on the diamond they can determine the magnetic field by analyzing the produced light. Since the relationship between currents and magnetic fields is well understood, the information they collect helps paint a detailed image of the current.
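That well-understood relationship is the Biot–Savart law. As a rough illustration of the principle (not the team’s actual reconstruction code, and with made-up numbers), the sketch below models a current-carrying channel as a bundle of thin parallel wires and sums the field each filament produces at a sensor height above the channel.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def field_above_strip(current, width, height, n_wires=2001):
    """Magnetic field (tesla) at a point `height` above the center of a flat
    strip carrying total `current`, modeled as n_wires thin filaments.

    Each filament contributes mu0*dI / (2*pi*r); by symmetry, only the
    component parallel to the strip survives above the center line.
    """
    d_i = current / n_wires
    b = 0.0
    for j in range(n_wires):
        x = -width / 2 + width * (j + 0.5) / n_wires  # filament position
        r2 = x * x + height * height
        b += MU0 * d_i * height / (2 * math.pi * r2)  # in-plane component
    return b

# 1 microamp through a 1-micron-wide channel, sensor 2 microns above it:
b_strip = field_above_strip(1e-6, 1e-6, 2e-6)
# Well above a narrow strip, the field approaches the thin-wire formula:
b_wire = MU0 * 1e-6 / (2 * math.pi * 2e-6)
print(b_strip, b_wire)  # close to each other, on the order of 1e-7 T
```

Running this relationship in reverse, from a map of the measured field back to the current distribution that produced it, is what turns the diamond’s fluorescence readings into an image of the current.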

To get a look at the currents in graphene, the researchers used nitrogen vacancies in two ways.

The first method provides the most detailed view. Researchers run a tiny diamond containing a single nitrogen vacancy straight across a conducting channel. This process measures the magnetic field along a narrow line across a current and reveals changes in the current over distances of about 50 nanometers (the graphene channels they investigated were about 1,000 to 1,500 nanometers wide). But the method is time-consuming, and it is challenging to keep the measurements aligned to form a complete image.

Their second approach produces a complete two-dimensional snapshot, like that shown in the image above, of a current at a particular instant. The graphene rests entirely on a diamond sheet that contains many nitrogen vacancies. This complementary method generates a fuzzier picture but allows them to see the entire current at once.

Not Your Ordinary Current

The researchers used these tools to investigate the flow of currents in graphene in a situation with particularly rich physics. Under the right conditions, graphene can have a current that is made not just out of electrons but out of an equal number of positively charged cousins—commonly called holes because they represent a missing electron.

In graphene, the two types of charges strongly interact and form what is known as a Dirac fluid. Researchers believe that understanding the effects of interactions on the behaviors of the Dirac fluid might reveal secrets of other materials with strong interactions, like high-temperature superconductors.  In particular, Walsworth and colleagues wanted to determine if the current in the Dirac fluid flows more like water and honey, or like an electrical current in copper.

In a fluid, the individual particles interact a lot—pushing and pulling on each other. These interactions are responsible for the formation of whirling vortices and the drag on things moving through a fluid. A fluid with these sorts of interactions is called viscous. Thicker fluids like honey or syrup that really drag on themselves are more viscous than thinner fluids like water.

But even water is viscous enough to flow unevenly in smooth pipes. The water slows down the closer you get to the edge of the pipe with the fastest current in the center of the pipe. This specific type of uneven flow is called viscous Poiseuille flow, named after Jean Léonard Marie Poiseuille, whose study of blood travelling through tiny blood vessels in frogs inspired him to investigate how fluids flow through small tubes.
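For a channel of width w, the Poiseuille profile is parabolic: the flow speed goes as (w/2)^2 − y^2 across the channel, vanishing at the walls, while ohmic flow is flat all the way to the edges. A quick sketch (illustrative numbers only) comparing the two profiles at equal total flow:

```python
def profiles(width=1.0, total=1.0, n=11):
    """Velocity across a channel for viscous (parabolic Poiseuille) versus
    ohmic (flat) flow, normalized so both carry the same total current."""
    ys = [-width / 2 + width * i / (n - 1) for i in range(n)]
    # The parabola (w/2)^2 - y^2 integrates to w^3/6 across the channel.
    viscous = [total * ((width / 2) ** 2 - y * y) / (width ** 3 / 6) for y in ys]
    ohmic = [total / width for _ in ys]
    return ys, viscous, ohmic

ys, viscous, ohmic = profiles()
# Viscous flow is zero at the walls and peaks mid-channel (1.5x the average);
# ohmic flow is the same everywhere.
print(viscous[0], viscous[len(ys) // 2], ohmic[0])
```

This edge-to-center contrast is exactly the signature the diamond sensors can look for in a current image.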

In contrast, the electrons in a normal conductor, like the wires in computers and walls, don’t interact much. They are much more influenced by the environment within the conducting material—often impurities in the material in particular. On the individual scale, their motion is more like that of perfume wafting through the air than water rushing down a pipe. Each electron mostly does its own thing, bouncing from one impurity to the next like a perfume molecule bouncing between air molecules. So electrical currents tend to spread out and flow evenly, all the way up to the edges of the conductor.

But in certain materials, like graphene, researchers realized that electrical currents can behave more like fluids. It requires just the right conditions of strong interactions and few impurities to see the electrical equivalents of Poiseuille flow, vortices and other fluid behaviors.

“Not many materials are in this sweet spot,” says Ku. “Graphene turns out to be such a material. When you take most other conductors to very low temperature to reduce the electron’s interactions with impurities, either superconductivity kicks in or the interactions between electrons just aren’t strong enough.”

Mapping Graphene’s Currents

While previous research indicated that electrons can flow viscously in graphene, it did not do so for a Dirac fluid, where the interactions between electrons and holes must be considered. Previously, researchers couldn’t get an image of a Dirac fluid current to confirm details, like whether it exhibited Poiseuille flow. But the two new methods introduced by Walsworth, Ku and their colleagues produce images that revealed that the Dirac fluid current decreases toward the edges of the graphene, like it does for water in a pipe. They also observed the viscous behavior at room temperature; evidence from previous experiments for viscous electrical flow in graphene was restricted to colder temperatures.

The team believes this technique will find many uses, and Ku is interested in continuing this line of research and trying to observe new viscous behaviors using these techniques in his next position as an assistant professor of physics at the University of Delaware. In addition to providing insight into physics related to the Dirac fluid like high temperature superconductors, the technique may also reveal exotic currents in other materials and provide new insights into phenomena like the quantum spin Hall effect and topological superconductivity. And as researchers better understand new electronic behaviors of materials, they may be able to develop other practical applications as well, like new types of microelectronics.

“We know there are lots of technological applications for things that carry electrical currents,” says Walsworth. “And when you find a new physical phenomenon, eventually, people will probably figure out some way to use it technologically. We want to think about that for the viscous current in graphene in the future.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/diamonds-shine-light-on-hidden-currents-graphene

In addition to Walsworth, Ku, Yacoby and Zhou, Qing Li, a physics graduate student at the Massachusetts Institute of Technology; Young J. Shin, a scientist at Brookhaven National Lab; Jing K. Shi, a scientist at the Institute for Infocomm Research; Claire Burch, a former research intern; Laurel E. Anderson, a physics graduate student at Harvard; Andrew T. Pierce, a physics graduate student at Harvard; Yonglong Xie, a joint postdoctoral fellow at Harvard and the Massachusetts Institute of Technology; Assaf Hamo, a postdoctoral fellow at Harvard; Uri Vool, a postdoctoral fellow at Harvard; Huiliang Zhang, a staff engineer at PDF Solutions; Francesco Casola, a quantitative research associate at Capital Fund Management; Takashi Taniguchi, a researcher at the National Institute for Materials Science in Japan; Kenji Watanabe, a researcher at the National Institute for Materials Science in Japan; Michael M. Fogler, a professor of physics at UC San Diego; and Philip Kim, a professor of physics at Harvard, were also co-authors of the paper.

Research Contacts
Ronald Walsworth
Mark Ku
Media Contact
Bailey Bedford

New Quantum Information Speed Limits Depend on the Task at Hand

Unlike speed limits on the highway, most speed limits in physics cannot be disobeyed. For example, no matter how little you care about getting a ticket, you can never go faster than the speed of light. Similarly stringent limits exist for information, too. The speed of light is still the ultimate speed limit, but depending on how information is stored and transmitted, there can be slower limits in practice.

The story gets particularly subtle when the information is quantum. Quantum information is represented by qubits (the quantum version of ordinary bits), which can be stored in photons, atoms or any number of other systems governed by the rules of quantum physics. Figuring out how fast information can move from one qubit to another is not only interesting from a fundamental point of view; it’s also important for more practical purposes, like improving the designs of quantum computers and learning what their limitations might be.

Now, a group of UMD researchers led by Adjunct Associate Professor Alexey Gorshkov—who is a Fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science and a physicist at the National Institute of Standards and Technology—in collaboration with teams at the University of Colorado Boulder, Caltech, and the Colorado School of Mines, has found something surprising: the speed limit for quantum information can depend on the task at hand. They detail their results in a paper published July 13, 2020 in the journal Physical Review X and featured in Physics.

A new protocol for cutting and pasting quantum information first spreads the content of one quantum bit (blue dot) over a region (black circle). Then, it takes advantage of long-range interactions (blue streaks) to transfer the information. Finally, it collects the information at the target quantum bit (red dot). (Credit: Chi-Fang Chen/Caltech)

Just as knowing the speed of light doesn’t automatically let us build rockets that can travel that fast, knowledge of the speed at which quantum information can travel doesn’t tell us how it can be done. But figuring out what sets these speed limits did allow the team to come up with new information transfer methods that approach the theoretical speed limit closer than ever before.

“Figuring out the fastest way to move quantum information around will help us maximize the performance of future quantum computers,” says Minh Tran, a graduate student in physics at UMD and the lead author of the paper.

One procedure subject to these new limits is like a quantum cut and paste: moving the information stored in one qubit to a different one far away. It’s a crucial task that can become a bottleneck as quantum computers get larger and larger. In quantum computers based on superconductors, like Google’s Sycamore, qubits only really talk to their next-door neighbors. Or, in physics-speak, their interactions are short-range. That means that once you cut a qubit, you’d have to go door to door, cutting and pasting it until you reach the target. The speed limit for this situation was found back in the 1970s. It’s strict and consistent—it doesn’t ease up no matter how far the information travels.
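That door-to-door routing can be pictured as a chain of SWAP operations, one per neighboring pair, so the transfer time grows linearly with distance. Here is a toy statevector sketch of that idea in plain Python (not tied to any real device or quantum-software API); the amplitudes 0.6 and 0.8 are arbitrary.

```python
def swap(state, i, j, n):
    """Swap qubits i and j of an n-qubit statevector (list of 2**n amplitudes)."""
    out = list(state)
    for idx in range(2 ** n):
        bi, bj = (idx >> i) & 1, (idx >> j) & 1
        if bi != bj:
            # flip both bits to find the partner basis state
            out[idx] = state[idx ^ (1 << i) ^ (1 << j)]
    return out

n = 4
alpha, beta = 0.6, 0.8        # arbitrary qubit state carried by qubit 0
state = [0.0] * (2 ** n)
state[0] = alpha              # |0000>
state[1] = beta               # |0001>  (qubit 0 is the low bit)

# Door to door: n-1 nearest-neighbor swaps move the state to qubit 3.
for q in range(n - 1):
    state = swap(state, q, q + 1, n)

print(state[0], state[1 << 3])  # the alpha/beta state now lives on qubit 3
```

With only nearest-neighbor swaps, moving a state across d qubits always costs d steps, which is the strict short-range speed limit described above; long-range interactions are what let some protocols beat it.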

Things get more complicated—and more realistic for a lot of quantum computing platforms—when you start to consider long-range interactions: qubits that talk not only to the ones directly next to them, but also to neighbors several doors down. Quantum computers built with trapped ions, polar molecules, and Rydberg atoms all have these long-range interactions.

Previous work has shown that in long-range interacting setups, there isn’t always a strict speed limit. Sometimes, the information can travel faster once it’s gone further away from its starting point, and other times its speed isn’t limited at all (except for the ultimate limit set by the speed of light). This depends on the dimensions of the quantum computer (if it’s a chain, a pancake, or a cube) as well as the strength of the long-range interaction (how loudly one qubit can talk to another many doors down).

Finding regimes where these long-range interactions relax the information speed limits carries the promise of making quantum processing much faster. Gorshkov, Tran and their collaborators looked more closely at the regime where the speed limit is not strict—where information is allowed to travel faster as it gets further away from its origin. What they found was surprising: for some applications, the speed limit was indeed loose as previously discovered. But for others, the speed limit was just as strict as in the nearest neighbor case.

This implies that for the same quantum computer the speed limits are different for different tasks. And even for the same task, such as quantum cut-and-paste, different rules can apply in different situations. If you cut-and-paste in the beginning of a computation, the speed limit is loose, and you can do it very quickly. But if you have to do it mid-computation, when the states of the qubits along the way aren’t known, a stricter speed limit applies.

“The existence of different speed limits is cool fundamentally because it shows a separation between tasks that seemed very similar,” says Abhinav Deshpande, a graduate student in physics at UMD and one of the authors of the new paper. 

So far, few experimental realizations of quantum computers have been able to take advantage of long-range interactions. Nevertheless, the state of the art is improving rapidly, and these theoretical findings may soon play a crucial role in designing quantum computing architectures and choosing protocols that optimize their efficiency. “Once you get systems that are larger and more coherent,” says Gorshkov, “down the road, these insights will be even more applicable.”

Original story by Dina Genkina

In addition to Tran, Deshpande and Gorshkov, authors on this paper included Chi-Fang Chen, a graduate student in physics at the California Institute of Technology; Adam Ehrenberg, a graduate student in physics at UMD; Andrew Guo, a graduate student in physics at UMD; Yifan Hong, a graduate student in physics at the University of Colorado Boulder; Zhexuan Gong, a former research scientist at JQI who is currently an assistant professor of physics at the Colorado School of Mines; and Andrew Lucas, an assistant professor of physics at the University of Colorado Boulder.

Research Contacts
Minh Tran
Alexey Gorshkov

Quantum Gases Won’t Take the Heat

The quantum world blatantly defies intuitions that we’ve developed while living among relatively large things, like cars, pennies and dust motes. In the quantum world, tiny particles can maintain a special connection over any distance, pass through barriers and simultaneously travel down multiple paths.

A less widely known quantum behavior is dynamical localization, a phenomenon in which a quantum object stays at the same temperature despite a steady supply of energy—bucking the assumption that a cold object will always steal heat from a warmer object.

This assumption is one of the cornerstones of thermodynamics—the study of how heat moves around. The fact that dynamical localization defies this principle means that something unusual is happening in the quantum world—and that dynamical localization may be an excellent probe of where the quantum domain ends and traditional physics begins. Understanding how quantum systems maintain, or fail to maintain, quantum behavior is essential not only to our understanding of the universe but also to the practical development of quantum technologies.

“At some point, the quantum description of the world has to change over to the classical description that we see, and it's believed that the way this happens is through interactions,” says UMD postdoctoral researcher Colin Rylands of the Joint Quantum Institute.

Equipment at the University of California, Santa Barbara for creating and manipulating quantum gases. It is being used to investigate the dynamical localization of interacting atoms, which is related to new work by JQI researchers. (Credit: Tony Mastres, UCSB)

Until now, dynamical localization has only been observed for single quantum objects, which has prevented it from contributing to attempts to pin down where the changeover occurs. To explore this issue, Rylands, together with Prof. Victor Galitski and other colleagues, investigated mathematical models to see if dynamical localization can still arise when many quantum particles interact. To reveal the physics, they had to craft models to account for various temperatures, interaction strengths and lengths of times. The team’s results, published in Physical Review Letters, suggest that dynamical localization can occur even when strong interactions are part of the picture.

“This result is an example of where a single quantum particle behaves completely differently from a classical particle, and then even with the addition of strong interactions the behavior still resembles that of the quantum particle rather than the classical,” says Rylands, who is the first author of the article.

A Quantum Merry-Go-Round

The result extends dynamical localization beyond its single-particle origins, into the regime of many interacting particles. But in order to visualize the effect, it’s still useful to start with a single particle. Often, that single particle is discussed in terms of a rotor, which you can picture as a playground merry-go-round (or anything else that spins in a circle). The energy of a rotor (and its temperature) is directly related to how fast it is spinning. And a rotor with a steady supply of energy—one that is given a regular “kick”—is a convenient way of visualizing the differences in the flow of energy in quantum and classical physics.

For example, imagine Hercules tirelessly swiping at a merry-go-round. Most of his swipes will speed it up, but occasionally a swipe will land poorly and slow it down. Under these (imaginary) conditions, a normal merry-go-round would spin faster and faster, building up more and more energy until vibrations finally shake the whole thing apart. This represents how a normal rotor, in theory, can heat up forever without hitting an energy limit.

In the quantum world, things go down differently. For a quantum merry-go-round each swipe doesn’t simply increase or decrease the speed. Instead, each swipe produces a quantum superposition over different speeds, representing the chance of finding the rotor spinning at different rates. It’s not until you make a measurement that a particular speed emerges from the quantum superposition caused by the preceding kicks.

Previous research, both theoretical and experimental, has shown that at first a quantum rotor doesn’t behave very differently from a normal rotor because of this distinction—on average a quantum merry-go-round will also have more energy after experiencing more kicks. But once a quantum rotor has been kicked enough, its speed tends to plateau. After a certain point, the persistent effort of our quantum Hercules fails to increase the quantum merry-go-round’s energy (on average).
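This plateau shows up directly in the textbook quantum kicked rotor, the single-particle model in which dynamical localization was first studied (not the interacting Bose gas from the paper). The sketch below, with generic parameters and a slow textbook discrete Fourier transform rather than an optimized FFT, evolves one kicked quantum rotor alongside an ensemble of classical rotors getting the same kicks: the classical energy keeps climbing while the quantum energy saturates.

```python
import math, cmath, random

N, K, KICKS = 64, 4.0, 150   # momentum states, kick strength, number of kicks
W = [cmath.exp(-2j * math.pi * r / N) for r in range(N)]  # DFT twiddle factors

def dft(x, inverse=False):
    """Unitary discrete Fourier transform (textbook O(N^2) version)."""
    s = math.sqrt(N)
    if inverse:
        return [sum(x[n] * W[(-k * n) % N] for n in range(N)) / s for k in range(N)]
    return [sum(x[n] * W[(k * n) % N] for n in range(N)) / s for k in range(N)]

def momentum(k):             # map DFT index to a signed integer momentum
    return k if k < N // 2 else k - N

# Quantum rotor: start at rest (uniform in angle); each period is a kick
# in the angle basis followed by free rotation in the momentum basis.
psi = [1 / math.sqrt(N)] * N
for _ in range(KICKS):
    psi = [psi[j] * cmath.exp(-1j * K * math.cos(2 * math.pi * j / N))
           for j in range(N)]                         # the kick
    phi = dft(psi)                                    # to momentum basis
    phi = [phi[k] * cmath.exp(-0.5j * momentum(k) ** 2) for k in range(N)]
    psi = dft(phi, inverse=True)                      # back to angle basis
e_quantum = sum(abs(phi[k]) ** 2 * momentum(k) ** 2 / 2 for k in range(N))

# Classical rotors under the same kicks (the "standard map").
random.seed(0)
ps = [0.0] * 400
ts = [random.uniform(0, 2 * math.pi) for _ in ps]
for _ in range(KICKS):
    for i in range(len(ps)):
        ps[i] += K * math.sin(ts[i])
        ts[i] = (ts[i] + ps[i]) % (2 * math.pi)
e_classical = sum(p * p / 2 for p in ps) / len(ps)

print(e_quantum, e_classical)  # quantum energy plateaus far below classical
```

The classical ensemble heats steadily, kick after kick, while the quantum rotor's average energy stalls after a short initial rise; that stall is dynamical localization.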

This behavior is conceptually similar to another thermodynamics-defying quantum phenomenon called Anderson localization. Philip Anderson, one of the founders of condensed-matter physics, earned a Nobel Prize for the discovery of the phenomenon. He and his colleagues explained how a quantum particle, like an electron, could become trapped despite many apparent opportunities to move. They explained that imperfections in the arrangement of atoms in a solid can lead to quantum interference among the paths available to a quantum particle, changing the likelihood of it taking each path. In Anderson localization, the chance of being on any path becomes almost zero, leaving the particle trapped in place.

Dynamical localization looks a lot like Anderson localization but instead of getting trapped at a particular position, a particle’s energy gets stuck. As a quantum object, a rotor’s energy and thus speed are restricted to a set of quantized values. These values form an abstract grid or lattice similar to the locations of atoms in a solid and can produce an interference among energy states similar to the interference among paths in physical space. The probabilities of the different possible energies, instead of the possible paths of a particle, interfere, and the energy and speed get stuck near a single value, despite ongoing kicks.
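The energy plateau of a single kicked quantum rotor can be reproduced in a few lines. Below is a minimal sketch using the standard split-step (Fourier) method; the kick strength and effective Planck constant are illustrative assumptions, not values from the paper. Each kick period multiplies the wavefunction by a kick phase in the angle representation and a free-rotation phase in the momentum representation.

```python
import numpy as np

N = 512                                   # angular grid points / momentum states
theta = 2 * np.pi * np.arange(N) / N      # rotor angle
n = np.fft.fftfreq(N, d=1.0 / N)          # integer angular-momentum quantum numbers
K, heff = 5.0, 2.0                        # kick strength, effective Planck constant

kick = np.exp(-1j * K * np.cos(theta) / heff)   # kick, angle representation
free = np.exp(-1j * heff * n**2 / 2)            # free rotation, momentum representation

psi = np.ones(N, dtype=complex) / np.sqrt(N)    # start at rest (n = 0 eigenstate)
energies = []
for _ in range(300):
    psi = np.fft.ifft(free * np.fft.fft(kick * psi))
    prob = np.abs(np.fft.fft(psi))**2
    prob /= prob.sum()                          # momentum-space probabilities
    energies.append(0.5 * heff**2 * np.sum(n**2 * prob))

# A classical rotor's energy would grow roughly linearly with the number of
# kicks; here the mean energy saturates after an initial diffusive stage.
```

Plotting `energies` against kick number shows the hallmark of dynamical localization: early growth followed by a plateau far below the classical linear estimate.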

Exploring a New Quantum Playground

While Anderson localization provided researchers with a perspective to understand a single kicked quantum rotor, it left some ambiguity about what happens to many interacting rotors that can toss energy back and forth. A common expectation was that the extra interactions would allow normal heating by disrupting the quantum balance that limits the increase of energy.

Galitski and colleagues identified a system where they suspected this expectation might not hold: an interacting one-dimensional Bose gas. In the gas, particles zipping back and forth along a line play the part of rotors spinning in place. The atoms follow the same basic principles as kicked rotors but are more practical to work with in a lab, where lasers can be used both to contain the gas and to cool the atoms to the low temperatures essential for strongly quantum behavior.

Once the team selected this playground, they explored mathematical models of the many interacting gas atoms. Studying the gas at a variety of temperatures, interaction strengths and numbers of kicks required switching between several different mathematical techniques to get a full picture. In the end, their combined results suggest that when a gas with strong interactions starts near zero temperature it can experience dynamical localization. The team named this phenomenon “many-body dynamical localization.”

"These results have important implications and fundamentally demonstrate our incomplete understanding of these systems," says Robert Konik, a coauthor of the paper and physicist at Brookhaven National Lab. "They also contain the seed of possible applications because systems that do not accept energy should be less sensitive to quantum decoherence effects and so might be useful for making quantum computers."

Experimental Support

Of course, a theoretical explanation is only half the puzzle; experimental confirmation is essential to knowing if a theory is on solid ground. Fortunately, an experiment on the opposite coast of the U.S. has been pursuing the same topic. Conversations with Galitski inspired David Weld, an associate physics professor at the University of California, Santa Barbara, to use his team’s experimental expertise to probe many-body dynamical localization.

“Usually it's not easy to convince an experimentalist to do an experiment based on theory,” says Galitski. “This case was kind of serendipitous, that David already had almost everything ready to go.”

Weld’s team is using a quantum gas of lithium atoms that is confined by lasers to create an experiment similar to the theoretical model Galitski’s team developed. (The main difference is that in the experiment the atoms move in three dimensions instead of just one.)

In the experiment, Weld and his team kick the atoms hundreds of times using laser pulses and repeatedly observe their fate. For different runs of the experiment, they tune the interaction strength of the atoms to different values.

“It's nice because we can go to a noninteracting regime quite perfectly, and that's something that it's pretty easy to calculate the behavior of,” says Weld. “And then we can continuously turn up the interaction and move into a regime that's more like what Victor and his coworkers are talking about in this latest paper. And we do observe localization, even in the presence of the strongest interactions that we can add to the system. That's been a surprise to me.”

Their preliminary results confirm the prediction that many-body dynamical localization can occur even when strong interactions are part of the picture. This opens new opportunities for researchers to try to pin down the boundary between the quantum and classical world.

“It's nice to be able to show something that people didn't expect and also for it to be experimentally relevant,” says Rylands.

Story by Bailey Bedford

In addition to Rylands, Galitski and Konik, former JQI graduate student Efim Rozenbaum, who is now a consultant with Boston Consulting Group, was also a co-author of the paper.

Research Contact: Colin Rylands

Media Contact: Bailey Bedford
 

Peeking into a World of Spin-3/2 Materials

Researchers have been pushing the frontiers of the quantum world for over a century. And time after time, spin has been a rich source of new physics.

Spin, like mass and electrical charge, is an intrinsic property of quantum particles. It is central to understanding how quantum objects will respond to a magnetic field, and it divides all quantum objects into two types. The half-integer ones, like the spin-1/2 electron, refuse to share the same quantum state, whereas the integer ones, like the spin-1 photon, don’t have a problem cozying up together. So, spin is essential when delving into virtually any topic governed by quantum mechanics, from the Higgs boson to superconductors. In a material, the momentum and energy of an electron are tied together by a “dispersion relation” (pictured above). This relationship influences the electrons’ behavior, sometimes making them behave as particles with different quantum properties. (Credit: Igor Boettcher/University of Maryland)

Yet after almost a century of playing a central role in quantum research, questions about spin remain. For example, why do all the elementary particles that we know about only have spin values of 0, 1/2, or 1? And what new behaviors might exist for particles with spin values greater than 1?

The first question may remain a cosmic mystery, but there are opportunities to explore the second. Inside of a material, a particle’s surroundings can cause it to behave like it has a new spin value. In the past couple years, researchers have discovered materials in which electrons behave like their spin has been bumped up, from 1/2 to 3/2. UMD postdoctoral researcher Igor Boettcher of the Joint Quantum Institute explored the new behaviors these spins might produce in a recent paper featured on the cover of Physical Review Letters.

Instead of looking at a particular material, Boettcher focused on the math that describes interactions between spin-3/2 electrons at low temperatures. These electrons can interact in more ways than their mundane spin-1/2 counterparts, which unlocks new phases—or collective behaviors—that researchers can look for in experiments. Boettcher sifted through the possible phases, searching for the ones that are likely to be stable at low temperatures. He looked at which phases tie up the least energy in the interactions, since as the temperature drops a material becomes most stable in the form containing the least energy (like steam condensing into liquid water and eventually freezing into ice).

He found three promising phases to hunt for in experiments. Which of these phases, if any, arise in a particular material will depend on its unique properties. Still, Boettcher’s predictions provide researchers with signals to keep an eye out for during experiments. If one of the phases forms, he predicts that common measurement techniques will reveal a signature shift in the electrical properties.

Boettcher’s work is an early step in the exploration of spin-3/2 materials. He hopes that one day the field might be comparable to that of graphene, with researchers constantly racing to explore new physics, produce better quality materials, and identify new transport properties.

“I really hope that this will develop into a big field, which will require both experimentalists and the theorists to do their part so that we can really learn something about the spin-3/2 particles and how they interact,” says Boettcher. “This is really just the beginning right now, because these materials just popped up.”

Story by Bailey Bedford

 
Research Contact: Igor Boettcher

Media Contact: Bailey Bedford