In a Smooth Move, Ions Ditch Disorder and Keep Their Memories

A Persian adage, notably wielded by Abe Lincoln and the band OK Go, expresses the ephemeral nature of the world: “This, too, shall pass.”

Physicists have their own version of this rule. It says that wiggles and wrinkles—really any small disturbances—tend to get ironed out over time. For instance, a couple drops of blue food coloring mixed into some cake batter will impart a blue tint to the whole batch; fresh water from a river funneled into the salty ocean will spread out and make a slightly less salty ocean; and a gush of cold wind entering your room will mingle with the air inside and reach a single, cooler temperature. The basic idea is that, given enough time, everything will reach equilibrium, regardless of where it started.

There are a few notable exceptions to this equanimous rule. In the quantum world of atoms and electrons, particles confined in a container made of electric and magnetic fields—akin to a bowl confining cake batter—can get stuck in place if the container isn’t smooth. When this “bowl” is rough, disorderly, and random, the particles can’t make up their minds about which way to go and instead stay put. Oddly, even when a bunch of these localized particles are allowed to influence each other, they can manage to stay localized, not exchanging energy and avoiding equilibrium. This effect, known as many-body localization (MBL), imparts particles with a kind of memory of where they started.

Now, scientists have found a new way to create disturbances that do not fade away. Instead of relying on disorder to freeze things in place, they tipped the quantum particles’ container to one side—a trick that is easier to conjure in the lab. A collaboration between the experimental group of College Park Professor Christopher Monroe and the theoretical group of Alexey Gorshkov, who is also a Fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science and a physicist at the National Institute of Standards and Technology, has used trapped ions to implement this new technique, confirming that it prevents their quantum particles from reaching equilibrium. The team also measured the slowed spread of information with the new tipping technique for the first time. They published their results recently in the journal Nature.

“One advantage of this method of many-body localization is that we don't need that disorder,” says Fangli Liu, former graduate student in physics at the University of Maryland (now a research scientist at QuEra Computing) and lead theorist on the work. “In the original system the disorder is realized in a random form. But with this method, each time you do a measurement you will have exactly the same result. It gives us the possibility to more efficiently use this many-body localization to do something interesting.”

Researchers have demonstrated a new way for atomic ions to host disturbances that do not fade away. (Credit: E. Edwards/JQI)

Instead of color (as in the dough example) or temperature (in the case of air in your room), the disturbance in the JQI experiment was in the ions’ spins—their little internal magnets that can point up or down (or a bit of both at the same time, as in a quantum superposition). These ion spins sit in a container shaped not like a bowl but instead like a single row of an egg carton, with each ion residing in a different dimple of the container. Normally, after some time all spins would point in the same direction uniformly, with no memory of whether each spin pointed up or down to begin with.

By controlling the ions individually, the scientists can prepare one spin that points up while the rest point down. With an egg carton container that’s flat (like it’s sitting on a table), the single spin disturbance can hop between ions, chatting with neighbors and ultimately causing all the ions to agree on a uniform configuration. In traditional many-body localization, where randomness and disorder rule the day, the egg-carton dimples become offset up or down from each other in a random way, paralyzing each spin in its spot.

Normally, ion spins that start out pointing in opposite directions will interact and reach an equilibrium, with no trace of where they started. But when the tilt in their container is large enough, they keep pointing in their original direction, creating a many-body localized state that remembers its initial configuration. (Credit: Adapted from article by the authors/JQI)

Instead of adding disorder, the team tilted the egg carton, offsetting each dimple a little higher than its neighbor to the left in a smooth, consistent way. This caused the spins to get localized as well, but for a very different reason. Quantum particles have wave-like properties, and once they start rolling down in the direction of a tilt, they can get reflected by the edges of the egg carton dimples. So instead of rolling downhill forever, they roll down and bounce back up over and over again, which confines them to their small region of the container.  

For a single particle, this pinning mechanism has been known since the 1930s. But whether it would persist in the face of interactions between many particles and halt equilibration has only recently been explored. Indeed, the idea that tilting the egg carton would result in a breakdown of equilibration was only proposed in 2019.
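The single-particle version of this pinning is easy to see in a toy calculation. The short sketch below (a minimal illustration, not the experiment's parameters) builds a quantum particle hopping on a tilted one-dimensional lattice and checks how many sites its energy eigenstates actually occupy; the lattice size, hopping strength and tilt are arbitrary illustrative values.

```python
# Minimal sketch: single-particle localization on a tilted 1D lattice.
# Parameters are illustrative only, not those of the ion experiment.
import numpy as np

L = 40      # number of lattice sites ("egg-carton dimples")
J = 1.0     # hopping amplitude between neighboring sites
F = 0.5     # tilt: extra on-site energy added per site to the right

# Tight-binding Hamiltonian: hopping on the off-diagonals, tilt on the diagonal.
H = np.zeros((L, L))
for n in range(L - 1):
    H[n, n + 1] = H[n + 1, n] = -J
H += np.diag(F * np.arange(L))

# Without the tilt (F = 0) the eigenstates spread across the whole lattice.
# With the tilt, each eigenstate is confined to a handful of sites.
energies, states = np.linalg.eigh(H)

# Participation ratio ~ number of sites an eigenstate effectively occupies.
prob = np.abs(states) ** 2
participation = 1.0 / np.sum(prob ** 2, axis=0)
print("typical number of occupied sites per eigenstate:",
      round(float(np.median(participation)), 1))
```

Setting the tilt to zero in the sketch makes the typical occupied region jump to a size comparable to the whole lattice, which is the equilibrating behavior the tilt suppresses.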

The JQI team confirmed this in their experiment. Using tightly focused lasers, they adjusted each ion individually and prepared them in a highly disturbed state, with spins pointing in alternating directions. At the same time, they had extra lasers shining on all the ions together, allowing them to talk to each other even while far apart. If the tilt was high enough, the team found, the ions’ spins remained in their original configuration for an extended period, refusing to succumb to equilibrium.
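To see how interactions enter the picture, here is a small exact-diagonalization toy model. It is not the experiment's long-range trapped-ion Hamiltonian: it uses a short chain of spins with nearest-neighbor flip-flop and Ising couplings plus a linear tilt, started in the alternating up/down pattern. The "imbalance" printed below measures how much of the initial pattern survives; with a large tilt it stays close to one, while setting the tilt to zero lets it wash out. All couplings and the chain length are illustrative choices.

```python
# Toy model of "Stark" many-body localization: a short spin chain with
# nearest-neighbor flip-flop + Ising couplings and a linear tilt, started
# in an alternating up/down pattern. Illustrative parameters only.
import numpy as np
from scipy.linalg import expm

N = 8            # number of spins (small enough for exact diagonalization)
J = 1.0          # nearest-neighbor flip-flop strength
Jz = 1.0         # nearest-neighbor Ising coupling
F = 4.0          # tilt per site; set F = 0.0 to watch the memory wash out
dim = 2 ** N

def sz(state, i):
    """sigma^z eigenvalue (+1 or -1) of spin i in a computational basis state."""
    return 1.0 if (state >> i) & 1 else -1.0

H = np.zeros((dim, dim))
for s in range(dim):
    # Diagonal part: Ising interactions plus the linear tilt.
    H[s, s] = sum(Jz * sz(s, i) * sz(s, i + 1) for i in range(N - 1))
    H[s, s] += sum(F * i * sz(s, i) for i in range(N))
    # Off-diagonal part: the flip-flop term exchanges anti-aligned neighbors.
    for i in range(N - 1):
        if sz(s, i) != sz(s, i + 1):
            flipped = s ^ (1 << i) ^ (1 << (i + 1))
            H[flipped, s] += J

# Start in the alternating pattern |up, down, up, down, ...>.
neel = sum(1 << i for i in range(0, N, 2))
psi0 = np.zeros(dim)
psi0[neel] = 1.0

# Imbalance: overlap of the instantaneous spin pattern with the initial one
# (1 = perfect memory, 0 = pattern completely washed out).
signs = np.array([[sz(s, i) * sz(neel, i) for i in range(N)] for s in range(dim)])
for t in [0.0, 2.0, 10.0]:
    psi = expm(-1j * H * t) @ psi0
    imbalance = np.mean(np.abs(psi) ** 2 @ signs)
    print(f"t = {t:5.1f}   imbalance = {imbalance:+.2f}")
```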

In addition to a conceptual leap, creating MBL without disorder may come with certain practical advantages. First, it is experimentally easier to implement a smooth tilt (in fact, a small tilt was present in the JQI experiment whether they wanted it or not). Second, it makes measurements much more straightforward. And third, this method is immune to an accidental breakdown of MBL. In regular disorder-based MBL, the random offsets of the dimples need to be large. If they aren’t, localization can break down in some spots and infect the whole batch. With a smooth tilt, there’s no such risk.

This opens the possibility of using many-body localization to create a robust memory. MBL might help maintain quantum information in future quantum computers or help preserve curiosities like time crystals or topological phases.

In the past year, two other experiments realizing this method were reported. The team of H. Wang in Hangzhou, China, set it up using superconducting qubits, and Monika Aidelsburger’s team in Munich, Germany, made it happen with ultracold atoms.

“There's a lot of shared themes between our three papers,” says William Morong, a postdoctoral researcher at JQI and lead author on the work, “and I would say all of them together give a more complete picture of the phenomenon than each individually.”

The JQI group was the only one, however, to demonstrate another key property of many-body localization: the slow spread of entanglement between their ions. The team used a technique adapted from nuclear magnetic resonance imaging to measure the crawling pace with which entanglement spread across their atoms, a hallmark of MBL.

“I think that our work shows the exciting progress that has been made in modern quantum simulation platforms,” Morong says. “We are reaching the point where we have enough control over collections of quantum particles in these platforms that we can read a theoretical paper describing some interesting effect that emerges in a specific system, program in the forces that we need to create this effect for ourselves, and measure subtle signatures in the quantum entanglement between the particles that are only revealed when you can observe each particle individually.”

Original story by Dina Genkina:  https://jqi.umd.edu/news/smooth-move-ions-ditch-disorder-and-keep-their-memories

In addition to Liu, Morong, Monroe and Gorshkov, authors on the paper included former graduate student in physics Patrick Becker (now a physicist at Booz Allen Hamilton), graduate student in physics Kate Collins, postdoctoral researcher at JQI Lei Feng, former graduate student in physics (now a postdoctoral fellow at Indiana University in Bloomington) Antonis Kyprianidis, former postdoctoral fellow at JQI (now assistant professor of physics at Rice University) Guido Pagano, and undergraduate researcher Tianyu You.

Research Contact
William Morong

Graphene’s Magic Act Relies on a Small Twist

Carbon is not the shiniest element, nor the most reactive, nor the rarest. But it is one of the most versatile.

Carbon is the backbone of life on earth and the fossil fuels that have resulted from the demise of ancient life. Carbon is the essential ingredient for turning iron into steel, which underlies technologies from medieval swords to skyscrapers and submarines. And strong, lightweight carbon fibers are used in cars, planes and windmills. Even just carbon on its own is extraordinarily adaptable: It is the only ingredient in (among other things) diamonds, buckyballs and graphite (the stuff used to make pencil lead).

This last form, graphite, is at first glance the most mundane, but thin sheets of it host a wealth of uncommon physics. Research into individual atom-thick sheets of graphite—called graphene—took off after 2004 when scientists developed a reliable way to produce it (using everyday adhesive tape to repeatedly peel layers apart). In 2010 early experiments demonstrating the quantum richness of graphene earned two researchers the Nobel Prize in physics.

In recent years, graphene has kept on giving. Researchers have discovered that stacking layers of graphene two or three at a time (called, respectively, bilayer graphene or trilayer graphene) and twisting the layers relative to each other opens fertile new territory for scientists to explore. Research into these stacked sheets of graphene is like the Wild West, complete with the lure of striking gold and the uncertainty of uncharted territory.

Researchers at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland, including JQI Fellows Sankar Das Sarma and Jay Sau and others, are busy creating the theoretical physics foundation that will be a map of this new landscape. And there is a lot to map; the phenomena in graphene range from the familiar, like magnetism, to more exotic things like strange metallicity, different versions of the quantum Hall effect, and the Pomeranchuk effect—each of which involves electrons coordinating to produce unique behaviors. One of the most promising veins for scientific treasure is the appearance of superconductivity (lossless electrical flow) in stacked graphene.

“Here is a system where almost every interesting quantum phase of matter that theorists ever could imagine shows up in a single system as the twist angle, carrier density, and temperature are tuned in a single sample in a single experiment,” says Das Sarma, who is also the Director of the CMTC. “Sounds like magic or science fantasy, except it is happening every day in at least ten laboratories in the world.”

The richness and diversity of the electrical behaviors in graphene stacks have inspired a stampede of research. The 2021 American Physical Society March Meeting included 13 sessions addressing the topics of graphene or twisted bilayers, and Das Sarma hosted a daylong virtual conference in June for researchers to discuss twisted graphene and the related research inspired by the topic. The topic of stacked graphene is extensively represented in scientific journals, and the online arXiv preprint server has over 2,000 articles posted about “bilayer graphene”—nearly 1,000 since 2018.

Perhaps surprisingly, graphene’s wealth of quantum research opportunities is tied to its physical simplicity.

In a sheet of graphene, a carbon atom sits at the corner of each hexagon. (Credit: Paul Chaikin with modifications by Bailey Bedford)

Graphene is a repeating honeycomb sheet with a carbon atom residing at every corner. The carbon atoms hold strongly to one another, making imperfections in the pattern uncommon. Each carbon atom contributes an electron that can freely move between atoms, and electrical currents are very good at traveling through the resulting sheets. Additionally, graphene is lightweight, has a tensile strength that is more than 300 times greater than that of steel and is unusually good at absorbing light. These features make it convenient to work with, and it is also easy to obtain.

Graphene’s pure, consistent structure is an excellent embodiment of the physics ideal of a two-dimensional solid material. This makes it the perfect playground for understanding how quantum physics plays out in a material without the complications and mess found in most other materials. A variety of new properties are then unlocked by stacking layers of graphene on top of each other. Each layer can be rotated (by what scientists call a “twist angle”) or shifted relative to the hexagonal pattern of its neighbors.

Graphene’s structural and electrical properties make it easy to change the quantum landscape that electrons experience in an experiment, giving researchers several options for how to customize, or tune, graphene’s electrical properties. Combining these basic building blocks has already resulted in a wealth of different results, and they aren’t done experimenting.

A ‘Magical’ Flourish

In the quantum world of electrons in graphene, the way that layers sit atop one another is important. When adjacent sheets in a bilayer are twisted with respect to each other, some atoms in the top sheet end up almost directly above their corresponding neighbors, while in other places atoms end up far away (on an atomic scale) from any atom in the other sheet. These differences form giant, repeating patterns similar to the distribution of atoms in the single sheet but over a much longer scale, as shown in the image at the top of the story and in the interactive visual below.

Every change of the angle also changes the scale of the larger pattern that forms the quantum landscape through which the electrons travel. The quantum environments formed by various repeating patterns (or a lack of any organization) are one of the main reasons that electrons behave differently in various materials; in particular, a material’s quantum environment dictates the interactions electrons experience. So each miniscule twist of a graphene layer opens a whole new world of electrical possibilities.
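The relationship between twist angle and pattern size follows from simple geometry: for a twist of angle θ, the moiré pattern repeats roughly every a/(2 sin(θ/2)), where a is graphene's lattice constant. The short sketch below plugs in a few angles to show how a twist of about one degree stretches the pattern to dozens of times the atomic spacing; the lattice constant is the standard literature value, quoted here purely for illustration.

```python
# Sketch: how the twist angle sets the scale of the moire pattern.
# Standard small-angle geometry; values are for illustration only.
import math

a = 0.246  # graphene lattice constant in nanometers

def moire_period(theta_degrees):
    """Period of the moire superlattice for a twist of theta degrees."""
    theta = math.radians(theta_degrees)
    return a / (2 * math.sin(theta / 2))

for theta in [5.0, 2.0, 1.1, 0.5]:
    lam = moire_period(theta)
    print(f"twist = {theta:4.1f} deg  ->  moire period ~ {lam:6.1f} nm "
          f"(~{lam / a:4.0f}x the atomic spacing)")
```

At the "magic angle" of about 1.1 degrees, this relation gives a pattern roughly 13 nanometers across, about fifty times larger than the spacing between individual carbon atoms.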

“This twist is really a new tuning knob that was absent before the discovery of these 2D materials,” says Fengcheng Wu, who has worked on graphene research with Das Sarma as a JQI and CMTC postdoc and now collaborates with him as a professor at Wuhan University in China. “In physics, we don't have too many tuning knobs. We have temperature, pressure, magnetic field, and electric field. Now we have a new tuning knob which is a big thing. And this twist angle also provides new opportunities to study physics.”

Researchers have discovered that at a special, small twist angle (about 1.1 degrees)—whimsically named the “magic angle”—the environment is just right to create strong interactions that radically change the graphene’s properties. When that precise angle is reached, the electrons tend to cluster around certain areas of the graphene, and new electrical behaviors suddenly appear as if summoned with a dramatic magician’s flourish. Magic-angle graphene behaves as a poorly conducting insulator in some circumstances and in other cases goes to the opposite extreme of being a superconductor—a material that transports electricity without any loss of energy.

The discovery of magic-angle graphene, and of its quantum behaviors resembling those of a high-temperature superconductor, was the Physics World 2018 Breakthrough of the Year. Superconductors have many valuable potential uses, like revolutionizing energy infrastructure and making efficient maglev trains. Finding a convenient, room-temperature superconductor has been a holy grail for scientists.

The discovery of a promising new form of superconductivity and a plethora of other electrical oddities, all with a convenient new knob to play with, are significant developments, but the most exciting thing for physicists is all the new questions that the discoveries have raised. Das Sarma has investigated many aspects of layered graphene, resulting in more than 15 papers on the topic since 2019; he says two of the questions that most interest him are how graphene becomes superconducting and how it becomes magnetic.

“Various graphene multilayers are turning out to be a richer playground for physics than any other known condensed matter or atomic collective system—the occurrence of superconductivity, magnetism, correlated insulator, strange metal here is coupled with an underlying nontrivial topology, providing an interplay among interaction, band structure, and topology which is unique and unprecedented,” says Das Sarma. “The subject should remain in the forefront of research for a long time."

Strange Bedfellows

Scientists have known about superconductivity and magnetism for a long time, but graphene isn’t where they expected to find them. Finding both individually was a surprise, but scientists have also found the two phenomena occurring simultaneously in some experiments.

Superconductivity and magnetism are usually antagonists, so their presence together in a graphene stack suggests there is something unusual happening. Researchers, like Das Sarma, hope that uncovering which interactions lead to these phenomena in graphene will give them a deeper understanding of the underlying physics and maybe allow them to discover more materials with exotic and useful properties.

One hint at the treasure possibly waiting to be discovered comes from measurements of twisted bilayer graphene’s electrical properties, which resemble behaviors seen in certain high-temperature superconductors. This resemblance suggests that graphene might be crucial to solving the mysteries surrounding high-temperature superconductivity.

The current clues point to the peculiarities of electron interactions being the key to understanding the topic. Superconductivity requires electrons to pair up, so the interactions that drive the pairing in graphene stacks are naturally of interest.

In an article published in Physical Review B, Das Sarma, Wu and Euyheon Hwang, who was formerly a JQI research scientist and is now a professor at Sungkyunkwan University in South Korea, proposed that what binds pairs of electrons in twisted bilayer graphene may be surprisingly mundane. They think the pairing mechanism may be the same as that in the most well understood superconductors. But they also think that the conventional origin may result in unconventional pairs.

Fengcheng Wu (Credit: Fengcheng Wu)

Their analysis suggests that it is not just the interactions that electrons have with each other that are enhanced at the magic angle but also the electrons’ interactions with vibrations of the carbon atoms. The vibrations, called phonons, are the quantum mechanical version of sound and other vibrations in materials.

In the best understood superconductors, it is phonons that bind electrons into pairs. In these superconductors, the partnered electrons are required to have opposite values of their spin—a quantum property related to how quantum particles orient themselves in a magnetic field. But the team’s theory suggests that in graphene this traditional pairing mechanism can not only pair electrons with opposite spins but also pair electrons with the same spin. Their description of the pairing method provides a possible explanation to help understand superconductivity in twisted bilayer graphene and graphene-based materials more generally.

“Unconventional superconductivity is highly sought after in physics, as it is exotic on its own and may also find applications in topological quantum computing,” says Wu. “Our theory provides a conventional mechanism towards unconventional superconductivity.”

More recently, Das Sarma, Sau, Wu and Yang-Zhi Chou, a JQI and CMTC postdoctoral researcher, collaborated to develop a tool to help scientists understand a variety of graphene stacks. A paper on this research was recently accepted in Physical Review Letters. They made a theoretical framework to explore the way that electrons behave on a hexagonal grid. They were inspired by experiments on magic-angle twisted trilayer graphene. Twisted trilayer graphene has the middle layer twisted relative to the top and bottom layers, like a cheese sandwich with the slice twisted so that the corners stick out. This graphene sandwich has attracted attention because it hosts superconductivity at a higher temperature than the two-stack version.

The team’s theoretical model provides a description of the electrons’ behavior in a particular quantum world. Using it on the case of twisted trilayer graphene, they showed that the uncommon pairing of electrons with the same spin could dominate the electrons’ behavior and be the source of twisted trilayer graphene’s superconductivity.

This new tool provides a starting place for investigating other graphene experiments. And the way the identified pairing mechanism influences the electrons may be significant in future discussions of the role of magnetism in graphene experiments.

Magnetism in stacked graphene is its own mysterious magic trick. Magnetism isn’t found in graphite or single layers of graphene but somehow appears when the stacks align. It’s especially notable because superconductivity and magnetism normally can’t coexist in a material the way they appear to in graphene stacks.

“This unconventional superconducting state in twisted trilayer graphene can resist a large magnetic field, a property that is rarely seen in other known superconducting materials,” says Chou.

In another article in Physical Review B, Das Sarma and Wu tackled the conundrum of the simultaneous presence of both superconductivity and magnetism in twisted double bilayer graphene—a system like bilayer graphene but where the twist is between two pairs of aligned graphene sheets (for a total of four sheets). This construction with additional layers has attracted attention because it creates a quantum environment that is more sensitive than a basic bilayer to an electric field applied through the stack, giving researchers a greater ability to tweak the superconductivity and magnetism and observe them in different quantum situations.

In the paper, the team provides an explanation for the source of magnetism and how an applied electric field could produce the observed change to a stack’s magnetic behavior. They believe the magnetism arises in a completely different way than it does in more common magnets, like iron-based refrigerator magnets. In an iron magnet, the individual iron atoms each have their own small magnetic field. But the team believes that in graphene the carbon atoms aren’t becoming magnetic. Instead, they think the magnetism comes from electrons that are freely moving throughout the sheet.

Their theory suggests that double bilayer graphene becomes magnetic because the electrons push each other apart more effectively in this particular quantum environment. This additional push could lead to the electrons coordinating their individual magnetic fields to make a larger field.

The coordination of electron spins might also be relevant to the pairing of electrons and the formation of potential superconductivity. Spin can be imagined as an arrow that wants to line up with any surrounding magnetic field. Superconductivity normally fails when the magnetism is strong enough that it tears apart the two opposite facing spins. But both spins being aligned in the pairs would explain the two phenomena peacefully coexisting in graphene experiments.

Around the Next Twist in the River

While these theories serve as a guide for researchers pushing forward into the uncharted territory of graphene research, they are far from being a definitive map. At the conference Das Sarma organized in June, a researcher presented new observations of superconductivity in three stacked graphene sheets without any twist. These stacks are offset so that none of the layers sit directly on top of each other; each hexagon has some of its carbon atoms placed at the center of the other layers’ hexagons. The experiment revealed two distinct areas of superconductivity, one of which is disturbed by magnetism and the other not. This suggests that the twist may not be the magical ingredient that produces all of the exotic phenomena, but it also raises new questions, offers a route for identifying which electronic behaviors are created or enhanced by the “magic” twist, and provides a new opportunity to investigate the fundamental sources of the underlying physics.

Yang-Zhi Chou (Credit: Yang-Zhi Chou)

Inspired by this work and previous observations of magnetism in the same system, the collaboration of Das Sarma, Sau, Wu and Chou mathematically explored the way phonon coupling of electrons might be playing out in these twist-less stacks. The team’s analysis suggests that phonon pairing is the likely driver of both types of superconductivity, with one occurring with matching spins and one with opposite spins. This work, led by Chou, was recently accepted in Physical Review Letters and has been chosen as a PRL Editors' Suggestion.

These results represent only a fraction of work on graphene experiments at JQI and the CMTC, and many other researchers have tackled additional aspects of this rich topic. But there remains much to discover and understand before the topic of layered graphene is charted and tamed territory. These early discoveries hint that as researchers dig deeper, they may uncover new veins of research representing a wealth of opportunities to understand new physics and maybe even develop new technologies.

“Applications are hard to predict, but the extreme tunability of these systems showing so many different phases and phenomena makes it likely that there could be applications,” Das Sarma says. “At this stage, it is very exciting fundamental research.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/graphenes-magic-act-relies-on-small-twist

Novel Design May Boost Efficiency of On-Chip Frequency Combs

On the cover of the Pink Floyd album The Dark Side of the Moon, a prism splits a ray of light into all the colors of the rainbow. This multicolored medley, which owes its emergence to the fact that light travels as a wave, is almost always hiding in plain sight; a prism simply reveals that it was there. For instance, sunlight is a mixture of many different colors of light, each bobbing up and down with its own characteristic frequency. But taken together the colors merge into a uniform yellowish glow.

A prism, or something like it, can also undo this splitting, mixing a rainbow back into a single beam. Back in the late 1970s, scientists figured out how to generate many colors of light, evenly spaced in frequency, and mix them together—a creation that became known as a frequency comb because of the spiky way the frequencies lined up like the teeth on a comb. They also overlapped the crests of the different frequencies in one spot, making the colors come together to form short pulses of light rather than one continuous beam.

As frequency comb technology developed, scientists realized that these combs could enable new laboratory developments, such as ultra-precise optical atomic clocks, and by 2005 frequency combs had earned two scientists a share of the Nobel Prize in physics. These days, frequency combs are finding uses in modern technology, helping self-driving cars “see” and allowing optical fibers to transmit many channels’ worth of information at once, among other things.

Now, a collaboration of researchers at the University of Maryland (UMD) has proposed a way to make chip-sized frequency combs ten times more efficient by harnessing the power of topology—a field of abstract math that underlies some of the most peculiar behaviors of modern materials. The team, led by Mohammad Hafezi and Kartik Srinivasan, as well as Yanne Chembo, an associate professor of electrical and computer engineering at UMD and a member of the Institute for Research in Electronics and Applied Physics, published their result recently in the journal Nature Physics.

“Topology has emerged as a new design principle in optics in the past decade,” says Hafezi, “and it has led to many intriguing new phenomena, some with no electronic counterpart. It would be fascinating if one also finds an application of these ideas.”

Small chips that can generate a frequency comb have been around for almost fifteen years. They are produced with the help of micro-ring resonators—circles of material that sit atop a chip and guide light around in a loop. These circles are usually made of a silicon compound, measure 10 to 100 microns in diameter, and are printed directly on a circuit board.

Light can be sent into the micro-ring from an adjacent piece of silicon compound, deposited in a straight line nearby. If the frequency of light matches one of the natural frequencies of the resonator, the light will go around and around thousands of times—or resonate—building up the light intensity in the ring before leaking back out into the straight-line trace.

Circling around thousands of times gives the light many chances to interact with the silicon (or other compound) it’s traveling through. This interaction causes other colors of light to pop up, distinct from the color sent into the resonator. Some of those colors will also resonate, going around and around the circle and building up power. These resonant colors are at evenly spaced frequencies—they correspond to wavelengths of light that are an integer fraction of the ring circumference, folding neatly into the circle and forcing the frequencies to form the teeth of a comb. At precisely the right input power and color, the crests of all the colors overlap automatically, making a stable comb. The evenly spaced colors that make up the comb come together to form a single, narrow pulse of light circulating around the ring.
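A quick back-of-the-envelope estimate shows how this resonance condition sets the comb's tooth spacing: an integer number of wavelengths must fit around the ring, so neighboring teeth are separated by the ring's free spectral range, roughly c/(nπd) for a ring of diameter d and refractive index n. The ring size and index in the sketch below are illustrative assumptions, not the parameters of the device described in the paper.

```python
# Back-of-the-envelope: comb tooth spacing for a micro-ring resonator.
# Resonance condition: an integer number m of wavelengths fits around the
# ring, m * (lambda / n) = pi * d, so neighboring teeth are spaced by the
# free spectral range  FSR = c / (n * pi * d).
# Diameter and refractive index are illustrative assumptions only.
import math

c = 2.998e8        # speed of light, m/s
d = 50e-6          # ring diameter: 50 microns (within the 10-100 micron range)
n = 2.0            # assumed effective refractive index of the ring material

circumference = math.pi * d
fsr_hz = c / (n * circumference)            # spacing between comb teeth
print(f"tooth spacing (free spectral range): {fsr_hz / 1e9:.0f} GHz")

# A few resonant frequencies near 1550 nm (a typical telecom wavelength):
m0 = round(n * circumference / 1550e-9)     # mode number nearest 1550 nm
for m in range(m0 - 2, m0 + 3):
    f = m * fsr_hz
    print(f"mode m = {m}:  f = {f / 1e12:.1f} THz,  "
          f"wavelength = {c / f * 1e9:.1f} nm")
```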

“If you tune the power and the frequency of the light going into the resonator to be just right, magically at the output you get these pulses of light,” says Sunil Mittal, a postdoctoral researcher at the Joint Quantum Institute (JQI) and the lead author of the paper.

Rendering of a light-guiding lattice of micro-rings that researchers predict will create a highly efficient frequency comb. (Credit: S. Mittal/JQI)

On-chip frequency combs allow for compact applications. For example, light detection and ranging (LIDAR) allows self-driving cars to detect what’s around them by bouncing short pulses of light produced by a frequency comb off their surroundings. When the pulse comes back to the car, it’s compared against another frequency comb to get an accurate map of the surroundings. In telecommunications, combs can be used to transmit more information in one optical fiber by writing different data onto each of the comb teeth using a technique called wavelength-division multiplexing (WDM).

But chip-scale frequency combs also have their limitations. In one micro-ring, the fraction of power that can be converted from the input into a comb at the output—the mode efficiency—is fundamentally limited to only 5%.

Mittal, Hafezi, and their collaborators have previously pioneered a micro-ring array with built-in topological protection, and used it to supply single photons on demand and generate made-to-order entangled photons. They wondered if a similar setup—a square lattice of micro-ring resonators with extra “link” rings—could also be adapted to improve frequency comb technology.

In this setting, the micro-rings along the outer edge of the lattice become distinct from all the rings in the middle. Light sent into the lattice spends most of its time along this outer edge and, due to the nature of the topological constraints, it doesn’t scatter into the center. The researchers call this outer circle of micro-rings a super-ring.

The team hoped to find magic conditions that would form a frequency comb in the pulses circulating around the super-ring. But this is tricky: Each of the rings in the lattice can have its own pulse of light circling round and round. To get one big pulse of light going around the super-ring, the pulses within each micro-ring would have to work together, syncing up to form an overall pulse going around the entire boundary.

Mittal and his collaborators didn’t know at what frequency or power this would happen, or if it would work at all. To figure it out, Mittal wrote computer code to simulate how light would traverse the 12 by 12 ring lattice. To the team’s surprise, not only did they find parameters that made the micro-ring pulses sync up into a super-ring pulse, but they also found that the efficiency was a factor of ten higher than possible for a single ring comb.

With “magic” input color and power, a lattice of micro-rings produces a single pulse of light circulating around the super-ring outer edge. This pulse is made up of equally spaced frequencies forming a highly efficient comb. (Credit: S. Mittal/JQI)

This improvement owes everything to the cooperation between micro-rings. The simulation showed that the comb’s teeth were spaced in accordance with the size of individual micro-rings, or wavelengths that fold neatly around the small circle. But if you zoomed in on any of the individual teeth, you’d see that they were really subdivided into smaller, more finely spaced sub-teeth, corresponding to the size of the super-ring.  Simply put, the incoming light was coupled with a few percent efficiency into each of these extra sub-teeth, allowing the aggregate efficiency to top 50%.

The team is working on an experimental demonstration of this topological frequency comb. Using simulations, they were able to single out silicon nitride as a promising material for the micro-rings, as well as figure out what frequency and power of light to send in. They believe constructing their superefficient frequency comb should be within reach of current state-of-the-art experimental techniques.

If such a comb is built, it may become important to the future development of several key technologies. The higher efficiency could benefit applications like LIDAR in self-driving cars or compact optical clocks. Additionally, the presence of finely spaced sub-teeth around each individual tooth could, for example, also help add more information channels in a WDM transmitter.

And the team hopes this is just the beginning.  “There could be many applications which we don't even know yet,” says Mittal. “We hope that there'll be much more applications and more people will be interested in this approach.”

Original story by Dina Genkina: https://jqi.umd.edu/news/novel-design-may-boost-efficiency-on-chip-frequency-combs

In addition to Mittal, Chembo, Hafezi (who is also a professor of physics and of electrical and computer engineering at UMD, as well as a member of the Quantum Technology Center and the Institute for Research in Electronics and Applied Physics), and Srinivasan (who is also a Fellow of the National Institute of Standards and Technology), the team included Assistant Research Scientist Gregory Moille.

Foundational Step Shows Quantum Computers Can Be Better Than the Sum of Their Parts

Pobody’s nerfect—not even the indifferent, calculating bits that are the foundation of computers. But College Park Professor Christopher Monroe’s group, together with colleagues from Duke University, has made progress toward ensuring we can trust the results of quantum computers even when they are built from pieces that sometimes fail. They have shown in an experiment, for the first time, that an assembly of quantum computing pieces can be better than the worst parts used to make it. In a paper published in the journal Nature on Oct. 4, 2021, the team shared how they took this landmark step toward reliable, practical quantum computers.

In their experiment, the researchers combined several qubits—the quantum version of bits—so that they functioned together as a single unit called a logical qubit. They created the logical qubit based on a quantum error correction code so that, unlike for the individual physical qubits, errors can be easily detected and corrected, and they made it fault-tolerant—capable of containing errors to minimize their negative effects.

“Qubits composed of identical atomic ions are natively very clean by themselves,” says Monroe, who is also a Fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science. “However, at some point, when many qubits and operations are required, errors must be reduced further, and it is simpler to add more qubits and encode information differently. The beauty of error correction codes for atomic ions is they can be very efficient and can be flexibly switched on through software controls.”

This is the first time that a logical qubit has been shown to be more reliable than the most error-prone step required to make it. The team was able to successfully put the logical qubit into its starting state and measure it 99.4% of the time, despite relying on six quantum operations that are individually expected to work only about 98.9% of the time.

A chip containing an ion trap that researchers use to capture and control atomic ion qubits (quantum bits). (Credit: Kai Hudek/JQI)

That might not sound like a big difference, but it’s a crucial step in the quest to build much larger quantum computers. If the six quantum operations were assembly line workers, each focused on one task, the assembly line would only produce the correct initial state 93.6% of the time (98.9% multiplied by itself six times)—roughly ten times worse than the error measured in the experiment. That improvement is because in the experiment the imperfect pieces work together to minimize the chance of quantum errors compounding and ruining the result, similar to watchful workers catching each other's mistakes.
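The arithmetic behind that comparison is short enough to check directly, using the numbers quoted above:

```python
# Arithmetic behind the assembly-line comparison in the text.
p_step = 0.989            # success probability of each entangling operation
n_steps = 6               # operations needed to prepare the logical qubit

naive_success = p_step ** n_steps
print(f"naive compounded success rate: {naive_success:.3f}")      # ~0.936
print(f"naive error rate:              {1 - naive_success:.3f}")  # ~0.064

measured_error = 0.006    # logical preparation-and-measurement error reported
print(f"improvement factor: ~{(1 - naive_success) / measured_error:.0f}x")
```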

The results were achieved using Monroe’s ion-trap system at UMD, which uses up to 32 individual charged atoms—ions—that are cooled with lasers and suspended over electrodes on a chip. The researchers then use each ion as a qubit by manipulating it with lasers.

“We have 32 laser beams,” says Monroe. “And the atoms are like ducks in a row; each with its own fully controllable laser beam. I think of it like the atoms form a linear string and we're plucking it like a guitar string. We're plucking it with lasers that we turn on and off in a programmable way. And that's the computer; that's our central processing unit.”

By successfully creating a fault-tolerant logical qubit with this system, the researchers have shown that careful, creative designs have the potential to unshackle quantum computing from the constraint of the inevitable errors of the current state of the art. Fault-tolerant logical qubits are a way to circumvent the errors in modern qubits and could be the foundation of quantum computers that are both reliable and large enough for practical uses.

Correcting Errors and Tolerating Faults

Developing fault-tolerant qubits capable of error correction is important because Murphy’s law is relentless: No matter how well you build a machine, something eventually goes wrong. In a computer, any bit or qubit has some chance of occasionally failing at its job. And the many qubits involved in a practical quantum computer mean there are many opportunities for errors to creep in.

Fortunately, engineers can design a computer so that its pieces work together to catch errors—like keeping important information backed up to an extra hard drive or having a second person read your important email to catch typos before you send it. Both people, or both drives, would have to mess up for a mistake to survive. While it takes more work to finish the task, the redundancy helps ensure the final quality.

Some prevalent technologies, like cell phones and high-speed modems, currently use error correction to help ensure the quality of transmissions and avoid other inconveniences. Error correction using simple redundancy can decrease the chance of an uncaught error as long as your procedure isn’t wrong more often than it’s right—for example, sending or storing data in triplicate and trusting the majority vote can drop the chance of an error from one in a hundred to less than one in a thousand.
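The triplicate example works out as follows: with a majority vote, at least two of the three copies must fail for an error to slip through. A few lines of arithmetic confirm the improvement quoted above:

```python
# Classical triple-redundancy example from the text: a majority vote fails
# only when at least two of the three copies are wrong.
p = 0.01   # chance any single copy is wrong (one in a hundred)

# exactly two wrong + all three wrong
p_majority_wrong = 3 * p**2 * (1 - p) + p**3
print(f"error rate with one copy:      {p:.4f}")
print(f"error rate with majority vote: {p_majority_wrong:.6f}")   # ~0.0003
```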

So while perfection may never be in reach, error correction can make a computer’s performance as good as required, as long as you can afford the price of using extra resources. Researchers plan to use quantum error correction to similarly complement their efforts to make better qubits and allow them to build quantum computers without having to conquer all the errors that quantum devices suffer from.

“What's amazing about fault tolerance, is it's a recipe for how to take small unreliable parts and turn them into a very reliable device,” says Kenneth Brown, a professor of electrical and computer engineering at Duke and a coauthor on the paper. “And fault-tolerant quantum error correction will enable us to make very reliable quantum computers from faulty quantum parts.”

But quantum error correction has unique challenges—qubits are more complex than traditional bits and can go wrong in more ways. You can’t just copy a qubit, or even simply check its value in the middle of a calculation. The whole reason qubits are advantageous is that they can exist in a quantum superposition of multiple states and can become quantum mechanically entangled with each other. To copy a qubit you have to know exactly what information it’s currently storing—in physical terms you have to measure it. And a measurement puts it into a single well-defined quantum state, destroying any superposition or entanglement that the quantum calculation is built on.

So for quantum error correction, you must correct mistakes in bits that you aren’t allowed to copy or even look at too closely. It’s like proofreading while blindfolded. In the mid-1990s, researchers started proposing ways to do this using the subtleties of quantum mechanics, but quantum computers are just reaching the point where they can put the theories to the test.

The key idea is to make a logical qubit out of redundant physical qubits in a way that can check if the qubits agree on certain quantum mechanical facts without ever knowing the state of any of them individually.

Can’t Improve on the Atom

There are many proposed quantum error correction codes to choose from, and some are more natural fits for a particular approach to creating a quantum computer. Each way of making a quantum computer has its own types of errors as well as unique strengths. So building a practical quantum computer requires understanding and working with the particular errors and advantages that your approach brings to the table.

The ion trap-based quantum computer that Monroe and colleagues work with has the advantage that their individual qubits are identical and very stable. Since the qubits are electrically charged ions, each qubit can communicate with all the others in the line through electrical nudges, giving freedom compared to systems that need a solid connection to immediate neighbors.

“They’re atoms of a particular element and isotope so they're perfectly replicable,” says Monroe. “And when you store coherence in the qubits and you leave them alone, it exists essentially forever. So the qubit when left alone is perfect. To make use of that qubit, we have to poke it with lasers, we have to do things to it, we have to hold on to the atom with electrodes in a vacuum chamber, all of those technical things have noise on them, and they can affect the qubit.”

For Monroe’s system, the biggest source of errors is entangling operations—the creation of quantum links between two qubits with laser pulses. Entangling operations are necessary parts of operating a quantum computer and of combining qubits into logical qubits. So while the team can’t hope to make their logical qubits store information more stably than the individual ion qubits, correcting the errors that occur when entangling qubits is a vital improvement.

The researchers selected the Bacon-Shor code as a good match for the advantages and weaknesses of their system. For this project, they only needed 15 of the 32 ions that their system can support, and two of the ions were not used as qubits but were only needed to get an even spacing between the other ions. For the code, they used nine qubits to redundantly encode a single logical qubit and four additional qubits to pick out locations where potential errors occurred. With that information, the detected faulty qubits can, in theory, be corrected without the “quantum-ness” of the qubits being compromised by measuring the state of any individual qubit.
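For readers curious about the structure, the sketch below lays out the nine data qubits of the Bacon-Shor code on a 3-by-3 grid and, in one common convention, forms X-type checks from pairs of adjacent columns and Z-type checks from pairs of adjacent rows. It simply verifies that every X check overlaps every Z check on an even number of qubits, which is why the checks can all be measured compatibly without revealing the encoded state. This is an illustration of the code's redundancy, not the team's actual circuit, and layout conventions vary.

```python
# Sketch of the nine-qubit Bacon-Shor code's check structure (one common
# convention): data qubits on a 3x3 grid, X-type checks on pairs of adjacent
# columns, Z-type checks on pairs of adjacent rows. An X-type and a Z-type
# Pauli operator commute exactly when they overlap on an even number of
# qubits, so we verify that here. Illustration only, not the experiment.
import itertools

def qubit(row, col):
    return 3 * row + col                      # label the 3x3 grid 0..8

# Supports (sets of qubits) of the four stabilizer checks.
x_checks = [{qubit(r, c) for r in range(3) for c in (c0, c0 + 1)}
            for c0 in range(2)]               # two adjacent columns each
z_checks = [{qubit(r, c) for c in range(3) for r in (r0, r0 + 1)}
            for r0 in range(2)]               # two adjacent rows each

for xs, zs in itertools.product(x_checks, z_checks):
    overlap = len(xs & zs)
    assert overlap % 2 == 0, "checks would anticommute!"
    print(f"X check {sorted(xs)} and Z check {sorted(zs)}: "
          f"overlap = {overlap} qubits -> commute")
```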

“The key part of quantum error correction is redundancy, which is why we needed nine qubits in order to get one logical qubit,” says Laird Egan (PhD, '21), who is the first author of the paper. “But that redundancy helps us look for errors and correct them, because an error on a single qubit can be protected by the other eight.”

The team successfully used the Bacon-Shor code with the ion-trap system. The resulting logical qubit required six entangling operations—each with an expected error rate between 0.7% and 1.5%. But thanks to the careful design of the code, these errors did not combine into an even higher error rate when the entangling operations were used to prepare the logical qubit in its initial state.

The team only observed an error in the qubit's preparation and measurement 0.6% of the time—less than the lowest error expected for any of the individual entangling operations. The team was then able to move the logical qubit to a second state with an error of just 0.3%. The team also intentionally introduced errors and demonstrated that they could detect them.

“This is really a demonstration of quantum error correction improving performance of the underlying components for the first time,” says Egan. “And there's no reason that other platforms can't do the same thing as they scale up. It's really a proof of concept that quantum error correction works.”

As the team continues this line of work, they say they hope to achieve similar success in building even more challenging quantum logical gates out of their qubits, performing complete cycles of error correction where the detected errors are actively corrected, and entangling multiple logical qubits together.

“Up until this paper, everyone's been focused on making one logical qubit,” says Egan. “And now that we’ve made one, we're like, ‘Single logical qubits work, so what can you do with two?’”

Original story by Bailey Bedford: https://jqi.umd.edu/news/foundational-step-shows-quantum-computers-can-be-better-sum-their-parts

In addition to Monroe, Brown and Egan, the coauthors of the paper are Marko Cetina, Andrew Risinger, Daiwei Zhu, Debopriyo Biswas, Dripto M. Debroy, Crystal Noel, Michael Newman and  Muyuan Li.

New Approach to Information Transfer Reaches Quantum Speed Limit

Even though quantum computers are a young technology and aren’t yet ready for routine practical use, researchers have already been investigating the theoretical constraints that will bound quantum technologies. One of the things researchers have discovered is that there are limits to how quickly quantum information can race across any quantum device.

These speed limits are called Lieb-Robinson bounds, and, for several years, some of the bounds have taunted researchers: For certain tasks, there was a gap between the best speeds allowed by theory and the speeds possible with the best algorithms anyone had designed. It’s as though no car manufacturer could figure out how to make a model that reached the local highway limit.

But unlike speed limits on roadways, information speed limits can’t be ignored when you’re in a hurry—they are the inevitable results of the fundamental laws of physics. For any quantum task, there is a limit to how quickly interactions can make their influence felt (and thus transfer information) a certain distance away. The underlying rules define the best performance that is possible. In this way, information speed limits are more like the max score on an old school arcade game than traffic laws, and achieving the ultimate score is an alluring prize for scientists.

In a new quantum protocol, groups of quantum entangled qubits (red dots) recruit more qubits (blue dots) at each step to help rapidly move information from one spot to another. Since more qubits are involved at each step, the protocol creates a snowball effect that achieves the maximum information transfer speed allowed by theory. (Credit: Minh Tran/JQI)

Now a team of researchers, led by Adjunct Associate Professor Alexey Gorshkov, has found a quantum protocol that reaches the theoretical speed limits for certain quantum tasks. Their result provides new insight into designing optimal quantum algorithms and proves that there hasn’t been a lower, undiscovered limit thwarting attempts to make better designs. Gorshkov, who is also a Fellow of the Joint Quantum Institute, the Joint Center for Quantum Information and Computer Science (QuICS) and a physicist at the National Institute of Standards and Technology, and his colleagues presented their new protocol in a recent article published in the journal Physical Review X.

“This gap between maximum speeds and achievable speeds had been bugging us, because we didn't know whether it was the bound that was loose, or if we weren't smart enough to improve the protocol,” says Minh Tran, a JQI and QuICS graduate student who was the lead author on the article. “We actually weren't expecting this proposal to be this powerful. And we were trying a lot to improve the bound—turns out that wasn't possible. So, we’re excited about this result.”

Unsurprisingly, the theoretical speed limit for sending information in a quantum device (such as a quantum computer) depends on the device’s underlying structure. The new protocol is designed for quantum devices where the basic building blocks—qubits—influence each other even when they aren’t right next to each other. In particular, the team designed the protocol for qubits that have interactions that weaken as the distance between them grows. The new protocol works for a range of interactions that don’t weaken too rapidly, which covers the interactions in many practical building blocks of quantum technologies, including nitrogen-vacancy centers, Rydberg atoms, polar molecules and trapped ions.

Crucially, the protocol can transfer information contained in an unknown quantum state to a distant qubit, an essential feature for achieving many of the advantages promised by quantum computers. The fact that the state is unknown limits the way information can be transferred and rules out some direct approaches, like just creating a copy of the information at the new location. (Making a copy requires knowing the quantum state you are transferring.)

In the new protocol, data stored on one qubit is shared with its neighbors, using a phenomenon called quantum entanglement. Then, since all those qubits help carry the information, they work together to spread it to other sets of qubits. Because more qubits are involved, they transfer the information even more quickly.

This process can be repeated to keep generating larger blocks of qubits that pass the information faster and faster. So instead of the straightforward method of qubits passing information one by one like a basketball team passing the ball down the court, the qubits are more like snowflakes that combine into a larger and more rapidly rolling snowball at each step. And the bigger the snowball, the more flakes stick with each revolution.

But that’s maybe where the similarities to snowballs end. Unlike a real snowball, the quantum collection can also unroll itself. The information is left on the distant qubit when the process runs in reverse, returning all the other qubits to their original states.

When the researchers analyzed the process, they found that the snowballing qubits speed along the information at the theoretical limits allowed by physics. Since the protocol reaches the previously proven limit, no future protocol should be able to surpass it.

“The new aspect is the way we entangle two blocks of qubits,” Tran says. “Previously, there was a protocol that entangled information into one block and then tried to merge the qubits from the second block into it one by one. But now because we also entangle the qubits in the second block before merging it into the first block, the enhancement will be greater.”

The protocol is the result of the team exploring the possibility of simultaneously moving information stored on multiple qubits. They realized that using blocks of qubits to move information would enhance a protocol’s speed.

“On the practical side, the protocol allows us to not only propagate information, but also entangle particles faster,” Tran says. “And we know that using entangled particles you can do a lot of interesting things like measuring and sensing with a higher accuracy. And moving information fast also means that you can process information faster. There's a lot of other bottlenecks in building quantum computers, but at least on the fundamental limits side, we know what's possible and what's not.”

In addition to the theoretical insights and possible technological applications, the team’s mathematical results also reveal new information about how large a quantum computation needs to be in order to simulate particles with interactions like those of the qubits in the new protocol. The researchers are hoping to explore the limits of other kinds of interactions and to explore additional aspects of the protocol such as how robust it is against noise disrupting the process.

Original story by Bailey Bedford: https://jqi.umd.edu/news/new-approach-information-transfer-reaches-quantum-speed-limit

In addition to Gorshkov and Tran, co-authors of the research paper include JQI and QuICS graduate student Abhinav Deshpande, JQI and QuICS graduate student Andrew Y. Guo, and University of Colorado Boulder Professor of Physics Andrew Lucas.