New Photonic Chip Spawns Nested Topological Frequency Comb

Scientists on the hunt for compact and robust sources of multicolored laser light have generated the first topological frequency comb. Their result, which relies on a small silicon nitride chip patterned with hundreds of microscopic rings, will appear in the June 21, 2024 issue of the journal Science.

Light from an ordinary laser shines with a single, sharply defined color—or, equivalently, a single frequency. A frequency comb is like a souped-up laser, but instead of emitting a single frequency of light, a frequency comb shines with many pristine, evenly spaced frequency spikes. The even spacing between the spikes resembles the teeth of a comb, which lends the frequency comb its name.
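To make the even spacing concrete, the teeth of an ideal comb sit at frequencies f_n = f_offset + n × f_spacing. Here is a minimal, illustrative sketch in Python; the numbers are invented for illustration, not taken from the experiment:

```python
def comb_teeth(f_offset_hz, f_spacing_hz, n_teeth):
    """Frequencies of an ideal comb's first n_teeth teeth: f_n = offset + n * spacing."""
    return [f_offset_hz + n * f_spacing_hz for n in range(n_teeth)]

# Invented numbers: a 35 MHz offset and a tooth every 250 MHz.
teeth = comb_teeth(f_offset_hz=35e6, f_spacing_hz=250e6, n_teeth=5)

# Every pair of neighboring teeth is separated by exactly the same spacing,
# like the evenly spaced teeth of a comb.
spacings = [b - a for a, b in zip(teeth, teeth[1:])]
```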

A new chip with hundreds of microscopic rings generated the first topological frequency comb. (Credit: E. Edwards)

The earliest frequency combs required bulky equipment to create. More recently, researchers have focused on miniaturizing them into integrated, chip-based platforms. Despite big improvements in shrinking the equipment needed to generate frequency combs, the fundamental ideas haven’t changed. Creating a useful frequency comb requires a stable source of light and a way to disperse that light into the teeth of the comb by taking advantage of optical gain, loss and other effects that emerge when the source of light gets more intense.

In the new work, JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and physics at the University of Maryland (UMD), JQI Fellow Kartik Srinivasan, who is also a Fellow of the National Institute of Standards and Technology, and several colleagues have combined two lines of research into a new method for generating frequency combs. One line is attempting to miniaturize the creation of frequency combs using microscopic resonator rings fabricated out of semiconductors. The second involves topological photonics, which uses patterns of repeating structures to create pathways for light that are immune to small imperfections in fabrication.

“The world of frequency combs is exploding in single-ring integrated systems,” says Chris Flower, a graduate student at JQI and the UMD Department of Physics and the lead author of the new paper. “Our idea was essentially, could similar physics be realized in a special lattice of hundreds of coupled rings? It was a pretty major escalation in the complexity of the system.”

By designing a chip with hundreds of resonator rings arranged in a two-dimensional grid, Flower and his colleagues engineered a complex pattern of interference that takes input laser light and circulates it around the edge of the chip while the material of the chip itself splits it up into many frequencies. In the experiment, the researchers took snapshots of the light from above the chip and showed that it was, in fact, circulating around the edge. They also siphoned out some of the light to perform a high-resolution analysis of its frequencies, demonstrating that the circulating light had the structure of a frequency comb twice over. They found one comb with relatively broad teeth and, nestled within each tooth, they found a smaller comb hiding.

A schematic of the new experiment. Incoming pulsed laser light (the pump laser) enters a chip that hosts hundreds of microrings. Researchers used an IR camera above the chip to capture images of light circulating around the edge of the chip, and they used a spectrum analyzer to detect a nested frequency comb in the circulating light.

Although this nested comb is only a proof of concept at the moment—its teeth aren’t quite evenly spaced and they are a bit too noisy to be called pristine—the new device could ultimately lead to smaller and more efficient frequency comb equipment that can be used in atomic clocks, rangefinding detectors, quantum sensors and many other tasks that call for accurate measurements of light. The well-defined spacing between spikes in an ideal frequency comb makes them excellent tools for these measurements. Just as the evenly spaced lines on a ruler provide a way to measure distance, the evenly spaced spikes of a frequency comb allow the measurement of unknown frequencies of light. Mixing a frequency comb with another light source produces a new signal that can reveal the frequencies present in the second source.
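The ruler analogy can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not the team's measurement procedure: mixing an unknown laser with a comb yields a beat note at the unknown light's distance from the nearest comb tooth, which pins down the unknown frequency.

```python
def nearest_tooth_beat(f_unknown, f_offset, f_spacing):
    """Return (tooth index n, beat frequency) for the comb tooth closest to f_unknown."""
    n = round((f_unknown - f_offset) / f_spacing)
    tooth = f_offset + n * f_spacing
    return n, abs(f_unknown - tooth)

# Invented numbers: a comb with 100 MHz tooth spacing and an unknown laser near 1 GHz.
n, beat = nearest_tooth_beat(f_unknown=1.000040e9, f_offset=0.0, f_spacing=100e6)
# The unknown frequency is recovered as n * f_spacing plus or minus the beat:
# here, tooth 10 (1 GHz) plus a 40 kHz beat note.
```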

Repetition Breeds Repetition

At least qualitatively, the repeating pattern of microscopic ring resonators on the new chip begets the pattern of frequency spikes that circulate around its edge.

Individually, the microrings form tiny cells that allow photons—the quantum particles of light—to hop from ring to ring. The shape and size of the microrings were carefully chosen to create just the right kind of interference between different hopping paths, and, taken together, the individual rings form a super-ring. Collectively, the rings disperse the input light into the many teeth of the comb and guide those teeth along the edge of the grid.

The microrings and the larger super-ring provide the system with two different time and length scales, since it takes light longer to travel around the larger super-ring than any of the smaller microrings. This ultimately leads to the generation of the two nested frequency combs: One is a coarse comb produced by the smaller microrings, with frequency spikes spaced widely apart. Within each of those coarsely spaced spikes lives a finer comb, produced by the super-ring. The authors say that this nested comb-within-a-comb structure, reminiscent of Russian nesting dolls, could be useful in applications that require precise measurements of two different frequencies that happen to be separated by a wide gap.
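A rough sketch of the comb-within-a-comb structure, with invented spacings rather than the experiment's actual values: every tooth of a coarse comb carries its own cluster of finely spaced teeth.

```python
def nested_comb(coarse_spacing, fine_spacing, n_coarse, n_fine):
    """Frequencies of a comb-within-a-comb, relative to a central tooth."""
    return [i * coarse_spacing + j * fine_spacing
            for i in range(n_coarse)
            for j in range(n_fine)]

# Invented spacings: coarse teeth every 1 THz (set by the microrings), and
# inside each coarse tooth a fine comb spaced by 1 GHz (set by the super-ring).
freqs = nested_comb(coarse_spacing=1e12, fine_spacing=1e9, n_coarse=3, n_fine=4)
```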

Getting Things Right

It took more than four years for the experiment to come together, a problem exacerbated by the fact that only one company in the world could make the chips that the team had designed.

Early chip samples had microrings that were too thick with bends that were too sharp. Once input light passed through these rings, it would scatter in all kinds of unwanted ways, washing out any hope of generating a frequency comb. “The first generation of chips didn’t work at all because of this,” Flower says. Returning to the design, he trimmed down the ring width and rounded out the corners, ultimately landing on a third generation of chips that were delivered in mid-2022.

While iterating on the chip design, Flower and his colleagues also discovered that it would be difficult to deliver enough laser power into the chip. For the chip to work, the intensity of the input light needed to exceed a threshold—otherwise no frequency comb would form. Normally they would have reached for a commercial continuous-wave (CW) laser, which delivers a steady beam of light. But those lasers dumped too much heat into the chip, causing it to burn out or swell and become misaligned with the light source. To sidestep these thermal issues, they needed to concentrate the energy in bursts, so they pivoted to a pulsed laser that delivers its energy in brief flashes.

But that introduced its own problems: Off-the-shelf pulsed lasers had pulses that were too short and contained too many frequencies. They tended to introduce a jumble of unwanted light—both on the edge of the chip and through its middle—instead of the particular edge-constrained light that the chip was designed to disperse into a frequency comb. Due to the long lead time and expense involved in getting new chips, the team needed to make sure they found a laser that balanced peak power delivery with longer-duration, tunable pulses.

“I sent out emails to basically every laser company,” Flower says. “I searched to find somebody who would make me a custom tunable and long-pulse-duration laser. Most people said they don't make that, and they're too busy to do custom lasers. But one company in France got back to me and said, ‘We can do that. Let's talk.’”

His persistence paid off, and, after a couple of shipments back and forth to France to install a beefier cooling system for the new laser, the team finally sent the right kind of light into their chip and saw a nested frequency comb come out.

The team says that while their experiment is specific to a chip made from silicon nitride, the design could easily be translated to other photonic materials that could create combs in different frequency bands. They also consider their chip the introduction of a new platform for studying topological photonics, especially in applications where a threshold exists between relatively predictable behavior and more complex effects—like the generation of a frequency comb.

Original story by Chris Cesare: https://jqi.umd.edu/news/new-photonic-chip-spawns-nested-topological-frequency-comb

In addition to Hafezi, Srinivasan and Flower, there were eight other authors of the new paper: Mahmoud Jalali Mehrabad, a postdoctoral researcher at JQI; Lida Xu, a graduate student at JQI; Grégory Moille, an assistant research scientist at JQI; Daniel G. Suarez-Forero, a postdoctoral researcher at JQI; Oğulcan Örsel, a graduate student at the University of Illinois at Urbana-Champaign (UIUC); Gaurav Bahl, a professor of mechanical science and engineering at UIUC; Yanne Chembo, a professor of electrical and computer engineering at UMD and the director of the Institute for Research in Electronics and Applied Physics; and Sunil Mittal, an assistant professor of electrical and computer engineering at Northeastern University and a former postdoctoral researcher at JQI.

This work was supported by the Air Force Office of Scientific Research (FA9550-22-1-0339), the Office of Naval Research (N00014-20-1-2325), the Army Research Laboratory (W911NF1920181), the National Science Foundation (DMR-2019444), and the Minta Martin and Simons Foundations.

 

Attacking Quantum Models with AI: When Can Truncated Neural Networks Deliver Results?

Currently, computing technologies are rapidly evolving and reshaping how we imagine the future. Quantum computing is taking its first toddling steps toward delivering practical results that promise unprecedented abilities. Meanwhile, artificial intelligence remains in public conversation as it’s used for everything from writing business emails to generating bespoke images or songs from text prompts to producing deep fakes.

Some physicists are exploring the opportunities that arise when the power of machine learning—a widely used approach in AI research—is brought to bear on quantum physics. Machine learning may accelerate quantum research and provide insights into quantum technologies, and quantum phenomena present formidable challenges that researchers can use to test the bounds of machine learning.

When studying quantum physics or its applications (including the development of quantum computers), researchers often rely on a detailed description of many interacting quantum particles. But the very features that make quantum computing potentially powerful also make quantum systems difficult to describe using current computers. In some instances, machine learning has produced descriptions that capture the most significant features of quantum systems while ignoring less relevant details—efficiently providing useful approximations.

An artistic rendering of a neural network consisting of two layers. The top layer represents a real collection of quantum particles, like atoms in an optical lattice. The connections with the hidden neurons below account for the particles’ interactions. (Credit: Modified from original artwork created by E. Edwards/JQI)

In a paper published May 20, 2024, in the journal Physical Review Research, two researchers at JQI presented new mathematical tools that will help researchers use machine learning to study quantum physics. And using these tools, they have identified new opportunities in quantum research where machine learning can be applied.

“I want to understand the limit of using traditional classical machine learning tools to understand quantum systems,” says JQI graduate student Ruizhi Pan, who was the first author of the paper.

The standard tool for describing collections of quantum particles is the wavefunction, which provides a complete description of the quantum state of the particles. But obtaining the wavefunction for more than a handful of particles tends to require impractical amounts of time and resources.

Researchers have previously shown that AI can approximate some families of quantum wavefunctions using fewer resources. In particular, physicists, including CMTC Director and JQI Fellow Sankar Das Sarma, have studied how to represent quantum states using neural networks—a common machine learning approach in which webs of connections handle information in ways reminiscent of the neurons firing in a living brain. Artificial neural networks are made of nodes—sometimes called artificial neurons—and connections of various strengths between them.

Today, neural networks take many forms and are applied to diverse applications. Some neural networks analyze data, like inspecting the individual pixels of a picture to tell if it contains a person, while others model a process, like generating a natural-sounding sequence of words given a prompt or selecting moves in a game of chess. The webs of connections formed in neural networks have proven useful at capturing hard-to-identify relationships, patterns and interactions in data and models, including the unique interactions of quantum particles described by wavefunctions.

But neural networks aren’t a magic solution for every situation or even for approximating every wavefunction. Sometimes, to deliver useful results, the network would have to be too big and complex to implement practically. Researchers need a strong theoretical foundation to understand when neural networks are useful and under what circumstances they fall prey to errors.

In the new paper, Pan and JQI Fellow Charles Clark investigated a type of neural network called a restricted Boltzmann machine (RBM), in which the nodes are split into two layers and connections are only allowed between nodes in different layers. One layer is called the visible, or input, layer, and the second is called the hidden layer, since researchers generally don’t directly manipulate or interpret it as much as they do the visible layer.

“The restricted Boltzmann machine is a concept that is derived from theoretical studies of classical ‘spin glass’ systems that are models of disordered magnets,” Clark says. “In the 1980s, Geoffrey Hinton and others applied them to the training of artificial neural networks, which are now widely used in artificial intelligence. Ruizhi had the idea of using RBMs to study quantum spin systems, and it turned out to be remarkably fruitful.”

For RBM models of quantum systems, physicists frequently use each node of the visible layer to represent a quantum particle, like an individual atom, and use the connections made through the hidden layer to capture the interactions between those particles. As the size and complexity of a quantum state grow, a neural network needs more and more hidden nodes to keep up, eventually becoming unwieldy.
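One widely used form of this idea can be sketched directly. The snippet below assumes the standard RBM wavefunction ansatz from the machine-learning-for-physics literature, not code from the new paper: each spin configuration gets an amplitude built from visible biases, hidden biases and the weights linking the two layers, with the hidden layer summed out into a product of cosh factors.

```python
import math

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude psi(spins) of an RBM state, for spins in {-1, +1}.

    a: visible biases, b: hidden biases, W[j][i]: weight between
    hidden node j and visible spin i.
    """
    visible = math.exp(sum(ai * si for ai, si in zip(a, spins)))
    hidden = 1.0
    for j, bj in enumerate(b):
        # Summing a hidden node over its two values yields a 2*cosh factor.
        theta = bj + sum(W[j][i] * si for i, si in enumerate(spins))
        hidden *= 2.0 * math.cosh(theta)
    return visible * hidden

# Two visible spins, one hidden node, all parameters zero: every
# configuration gets the same amplitude, 2 * cosh(0) = 2.
amp = rbm_amplitude([+1, -1], a=[0.0, 0.0], b=[0.0], W=[[0.0, 0.0]])
```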

However, the exact relationships between the complexity of a quantum state, the number of hidden nodes used in a neural network, and the resulting accuracy of the approximation are difficult to pin down. This lack of clarity is an example of the black box problem that permeates the field of machine learning. It exists because researchers don’t meticulously engineer the intricate web of a neural network but instead rely on repeated steps of trial and error to find connections that work. This approach often delivers more accurate or efficient results than researchers know how to achieve by working from first principles, but it doesn’t explain why the connections that make up the neural network deliver the desired result—so the results might as well have come from a black box. This built-in inscrutability makes it difficult for physicists to know which quantum models are practical to tackle with neural networks.

Pan and Clark decided to peek behind the veil of the hidden layer and investigate how neural networks boil down the essence of quantum wavefunctions. To do this, they focused on neural network models of a one-dimensional line of quantum spins. A spin is like a little magnetic arrow that wants to point along a magnetic field and is key to understanding how magnets, superconductors and most quantum computers function.

Spins naturally interact by pushing and pulling on each other. Through chains of interactions, even two distant spins can become correlated—meaning that observing one spin also provides information about the other spin. All the correlations between particles tend to drive quantum states into unmanageable complexity. 

Pan and Clark did something that at first glance might not seem relevant to the real world: They imagined and analyzed a neural network that uses infinitely many hidden nodes to model a fixed number of spins.

“In reality of course we don't hope to use a neural network with an infinitely large system size,” Pan says. “We often want to use finite size neural networks to do the numerical computations, so we need to analyze the effects of doing truncations.”

Pan and Clark already knew that using more hidden nodes generally produced more accurate results, but the research community only had a fuzzy understanding of how the accuracy suffers when fewer hidden nodes are used. By backing up and getting a view of the infinite case, Pan and Clark were able to describe the hypothetical, perfectly accurate representation and observe the contributions made by the infinite addition of hidden nodes. The nodes don’t all contribute equally. Some capture the basics of significant features, while many contribute small corrections.

The pair developed a method that sorts the hidden nodes into groups based on how much correlation they capture between spins. Based on this approach, Pan and Clark developed mathematical tools for researchers to use when developing, comparing and interpreting neural networks. With their new perspective and tools, Pan and Clark identified and analyzed the forms of errors they expect to arise from truncating a neural network, and they identified theoretical limits on how big the errors can get in various circumstances. 
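One way to picture truncation, as a rough illustrative sketch rather than the paper's actual procedure: rank the hidden nodes by the strength of their couplings to the spins, keep only the strongest, and compare the network's output before and after. Since an overall normalization is irrelevant for a quantum state, the comparison below uses the ratio of amplitudes between two spin configurations.

```python
import math

def rbm_amp(spins, b, W):
    """Unnormalized RBM amplitude with zero visible biases, spins in {-1, +1}."""
    amp = 1.0
    for bj, row in zip(b, W):
        amp *= 2.0 * math.cosh(bj + sum(w * s for w, s in zip(row, spins)))
    return amp

def truncate(b, W, keep):
    """Keep the `keep` hidden nodes whose couplings to the spins are strongest."""
    order = sorted(range(len(b)), key=lambda j: -sum(abs(w) for w in W[j]))[:keep]
    return [b[j] for j in order], [W[j] for j in order]

# Invented parameters: one strongly coupled hidden node and two weak ones.
b = [0.1, 0.3, 0.0]
W = [[0.8, -0.7, 0.9],   # strongly coupled node: captures the basics
     [0.01, 0.02, 0.0],  # weakly coupled node: small correction
     [0.0, 0.01, 0.01]]  # weakly coupled node: small correction
s1, s2 = [+1, -1, +1], [-1, +1, -1]

full_ratio = rbm_amp(s1, b, W) / rbm_amp(s2, b, W)
b_t, W_t = truncate(b, W, keep=1)
trunc_ratio = rbm_amp(s1, b_t, W_t) / rbm_amp(s2, b_t, W_t)
rel_err = abs(full_ratio - trunc_ratio) / full_ratio
# Dropping the weakly coupled nodes shifts relative amplitudes by well under 1%.
```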

In previous work, physicists generally relied on restricting the number of connections allowed for each hidden node to keep the complexity of the neural network in check. This in turn generally limited the reach of interactions between particles that could be modeled—earning the resulting collection of states the name short-range RBM states.

Pan and Clark’s work revealed a chance to apply RBMs outside of those restrictions. They defined a new group of states, called long-range-fast-decay RBM states, that have less strict conditions on hidden node connections but that still often remain accurate and practical to implement. The looser restrictions on the hidden node connections allow a neural network to represent a greater variety of spin states, including ones with interactions stretching farther between particles.

“There are only a few exactly solvable models of quantum spin systems, and their computational complexity grows exponentially with the number of spins,” says Clark. “It is essential to find ways to reduce that complexity. Remarkably, Ruizhi discovered a new class of such systems that are efficiently attacked by RBMs. It’s the old hero-returns-home story: from classical spin glass came the RBM, which grew up among neural networks, and returned home with a gift of order to quantum spin systems.”

The pair’s analysis also suggests that their new tools can be adapted to work for more than just one-dimensional chains of spins, including particles arranged in two or three dimensions. The authors say these insights can help physicists explore the divide between states that are easy to model using RBMs and those that are impractical. The new tools may also guide researchers to be more efficient at pruning a network’s size to save time and resources. Pan says he hopes to further explore the implications of their theoretical framework.

“I'm very happy that I realized my goal of building our research results on a solid mathematical basis,” Pan says. “I'm very excited that I found such a research field which is of great prospect and in which there are also many unknown problems to be solved in the near future.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/attacking-quantum-models-ai-when-can-truncated-neural-networks-deliver-results

IceCube Observes Seven Astrophysical Tau Neutrino Candidates

Neutrinos are tiny, weakly interacting subatomic particles that can travel astronomical distances undisturbed. As such, they can be traced back to their sources, revealing the mysteries surrounding the cosmos. High-energy neutrinos that originate from the farthest reaches beyond our galaxy are called astrophysical neutrinos and are the main subject of study for the IceCube Neutrino Observatory, a cubic-kilometer-sized neutrino telescope at the South Pole. In 2013, IceCube presented its first evidence of high-energy astrophysical neutrinos originating from cosmic accelerators, beginning a new era in astronomy. 

These cosmic messengers come in three different flavors: electron, muon, and tau, with astrophysical tau neutrinos being exceptionally difficult to pin down. Now, in a new study recently accepted as an “Editors’ Suggestion” by Physical Review Letters, the IceCube Collaboration presents the discovery of the once-elusive astrophysical tau neutrinos, a new kind of astrophysical messenger. 

IceCube detects neutrinos using cables (strings) of digital optical modules (DOMs), with a total of 5,160 DOMs embedded deep within the Antarctic ice. When neutrinos interact with molecules in the ice, they produce charged particles that emit blue light as they travel, which individual DOMs register and digitize. The light forms distinctive patterns, one of which, the double cascade, comes from high-energy tau neutrino interactions within the detector.
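The double-pulse signature can be pictured with a toy waveform (invented numbers standing in for real DOM data): light from the initial neutrino interaction and light from the later tau decay arrive as two separated pulses, which show up as two distinct peaks.

```python
import math

def waveform(t, t_interaction, t_decay, width=3.0):
    """Sum of two Gaussian light pulses arriving at t_interaction and t_decay."""
    return (math.exp(-((t - t_interaction) / width) ** 2)
            + math.exp(-((t - t_decay) / width) ** 2))

# Sample the toy waveform on a grid of 80 time steps.
samples = [waveform(t, t_interaction=20.0, t_decay=45.0) for t in range(80)]

# Flag local maxima above a threshold: two well-separated pulses
# register as two peaks, the hallmark of a double cascade.
peaks = [i for i in range(1, len(samples) - 1)
         if samples[i] > samples[i - 1] and samples[i] > samples[i + 1]
         and samples[i] > 0.5]
```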

The production of a double pulse waveform. The photons from a neutrino interaction (blue) arrive at the top middle DOM at time tI, producing the first peak in the waveform, while photons from the tau lepton decay (purple) arrive at the same DOM at time tD, producing the second peak. Credit: Jack Pairin/IceCube Collaboration

Prior IceCube analyses had seen hints of the subtle signatures produced by astrophysical tau neutrinos, which kept the researchers motivated to pinpoint them. After rendering each event into three images (see figure below), they trained convolutional neural networks (CNNs) optimized for image classification to distinguish images produced by tau neutrinos from images produced by various backgrounds. Once simulations confirmed the technique's sensitivity to tau neutrinos, it was applied to 10 years of IceCube data acquired between 2011 and 2020. The result was seven strong candidate tau neutrino events.

“The detection of seven candidate tau neutrino events in the data, combined with the very low amount of expected background, allows us to claim that it is highly unlikely that backgrounds are conspiring to produce seven tau neutrino imposters,” said Doug Cowen, a professor of physics at Penn State University and one of the study leads. “The discovery of astrophysical tau neutrinos also provides a strong confirmation of IceCube’s earlier discovery of the diffuse astrophysical neutrino flux.”

Candidate astrophysical tau neutrino detected on November 13, 2019. Each column corresponds to one of the three neighboring strings of the selected event. Each figure in the top row shows the DOM number, proportional to the depth, versus the time of the digitized PMT signal in 3-ns bins, with the bin color corresponding to the size of the signal in each time bin, for each of the three strings. The total number of photons detected by each string is provided at the upper left in each figure. In the most-illuminated string (left column), the arrival of light from two cascades is visible as two distinct hyperbolas. The bottom row of figures shows the “saliency” for one of the CNNs for each of the three strings. The saliency shows where changes in light level have the greatest impact on the value of the CNN score. The black line superimposed on the saliency plots shows where the light level goes to zero and is effectively an outline of the figures in the top row. The saliency is largest at the leading and trailing edges of the light emitted by the two tau neutrino cascades, showing that the CNN is mainly sensitive to the overall structure of the event. Credit: IceCube Collaboration

Cowen added that the probability of the background mimicking the signal was estimated to be less than one in 3.5 million.

UMD Research Scientist Erik Blaufuss served as an internal reviewer for the analysis, carefully studying the methods and techniques used to make the discovery. Assistant Professor Brian Clark leads the scientific working group in IceCube that produced the result. The IceCube collaboration includes several UMD faculty, including Kara Hoffman, Greg Sullivan, and Michael Larson, in addition to several graduate students and postdocs. The UMD group plays a leading role in the maintenance and operations of the detector, as well as the simulation and analysis of the data.

Future analyses will incorporate more of IceCube’s strings, since this study used just three of them. An expanded analysis would increase the sample of tau neutrinos, which could then be used to perform the first three-flavor study of neutrino oscillations—the phenomenon where neutrinos change flavors—over cosmological distances. This type of study could address questions such as the mechanism of neutrino production at astrophysical sources and the properties of the space through which neutrinos travel.
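For a flavor of what an oscillation study computes, here is the textbook two-flavor vacuum-oscillation formula as a sketch. The real study involves all three flavors and astrophysical baselines; the parameter values below are merely illustrative.

```python
import math

def flavor_change_probability(L_km, E_GeV, sin2_2theta=0.85, dm2_eV2=2.5e-3):
    """Two-flavor vacuum formula: P(a -> b) = sin^2(2theta) * sin^2(1.27 dm^2 L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# The chance of a flavor change oscillates with distance traveled over energy.
p_near = flavor_change_probability(L_km=1.0, E_GeV=1.0)    # barely any oscillation yet
p_far = flavor_change_probability(L_km=500.0, E_GeV=1.0)   # oscillation fully developed
```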

Currently, there is no tool specifically designed to determine the energy and direction of tau neutrinos that produce the signatures seen in this analysis. Such an algorithm could be used to better differentiate a potential tau neutrino signal from background and to help identify candidate tau neutrinos in real time at the South Pole. Similar to current IceCube real-time alerts issued for other neutrino types, alerts for tau neutrinos could be issued to the astronomical community for follow-up studies.

All in all, this exciting discovery comes with the “intriguing possibility of leveraging tau neutrinos to uncover new physics,” said Cowen. 

+ info “Observation of Seven Astrophysical Tau Neutrino Candidates with IceCube,” The IceCube Collaboration: R. Abbasi et al. Accepted by Physical Review Letters. arxiv.org/abs/2403.02516

Original story by 

A Focused Approach Can Help Untangle Messy Quantum Scrambling Problems

The world is a cluttered, noisy place, and the ability to effectively focus is a valuable skill. For example, at a bustling party, the clatter of cutlery, the conversations, the music, the scratching of your shirt tag and almost everything else must fade into the background for you to focus on finding familiar faces or giving the person next to you your undivided attention. 

Similarly, nature and experiments are full of distractions and negligible interactions, so scientists need to deliberately focus their attention on sources of useful information. For instance, the temperature of the crowded party is the result of the energy carried by every molecule in the air, the air currents, the molecules in the air picking up heat as they bounce off the guests and numerous other interactions. But if you just want to measure how warm the room is, you are better off using a thermometer that will give you the average temperature of nearby particles rather than trying to detect and track everything happening from the atomic level on up. A few well-chosen features—like temperature and pressure—are often the key to making sense of a complex phenomenon.

It is especially valuable for researchers to focus their attention when working on quantum physics. Scientists have shown that quantum mechanics accurately describes small particles and their interactions, but the details often become overwhelming when researchers consider many interacting quantum particles. Applying the rules of quantum physics to just a few dozen particles is often more than any physicist—even using a supercomputer—can keep track of. So, in quantum research, scientists frequently need to identify essential features and determine how to use them to extract practical insights without being buried in an avalanche of details.

A collection of quantum particles can store information in various collective quantum states. The above model represents the states as blue nodes and illustrates how interactions can scramble the organized information of initial states into a messy combination by mixing the options along the illustrated links. (Credit: Amit Vikram, UMD)

In a paper published in the journal Physical Review Letters in January 2024, Professor Victor Galitski and JQI graduate student Amit Vikram identified a new way that researchers can obtain useful insights into the way information associated with a configuration of particles gets dispersed and effectively lost over time. Their technique focuses on a single feature that describes how various amounts of energy can be held by different configurations of a quantum system. The approach provides insight into how a collection of quantum particles can evolve without the researchers having to grapple with the intricacies of the interactions that make the system change over time.

This result grew out of a previous project where the pair proposed a definition of chaos for the quantum world. In that project, the pair worked with an equation describing the energy-time uncertainty relationship—the less popular cousin of the Heisenberg uncertainty principle for position and momentum. The Heisenberg uncertainty principle means there’s always a tradeoff between how accurately you can simultaneously know a quantum particle’s position and momentum. The tradeoff described by the energy-time uncertainty relationship is not as neatly defined as its cousin, so researchers must tailor its application to different contexts and be careful how they interpret it. But in general, the relationship means that knowing the energy of a quantum state more precisely increases how long it tends to take the state to shift to a new state.
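In standard notation (a textbook form of these relationships, not equations taken from the paper), the two uncertainty principles read:

```latex
% Heisenberg uncertainty: position and momentum
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}

% Energy-time uncertainty (Mandelstam-Tamm form): a state with energy
% spread \Delta E needs a time of at least roughly \hbar/\Delta E to
% evolve into a distinguishably different state
\Delta E \,\Delta t \;\ge\; \frac{\hbar}{2}
```

The second relation captures the sense in which a sharply defined energy implies slow change: as the energy spread shrinks, the minimum time for the state to evolve into something distinguishably new grows.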

When Galitski and Vikram were contemplating the energy-time uncertainty relationship, they realized it naturally lent itself to studying changes in quantum systems—even those with many particles—without getting bogged down in too many details. Using the relationship, the pair developed an approach that uses just a single feature of a system to calculate how quickly the information contained in an initial collection of quantum particles can mix and diffuse.

The feature they built their method around is called the spectral form factor. It describes the energies that quantum physics allows a system to hold and how common they are—like a map that shows which energies are common and which are rare for a particular quantum system.
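In its simplest textbook form (a sketch of the standard definition; the paper's precise formulation may differ), the spectral form factor is built from a system's energy levels alone. A minimal illustration, assuming units where hbar = 1 and a made-up spectrum:

```python
import numpy as np

def spectral_form_factor(energies, times):
    """Textbook spectral form factor K(t) = |sum_n exp(-i E_n t)|^2 / N^2,
    computed directly from a list of N energy levels (hbar = 1)."""
    energies = np.asarray(energies, dtype=float)
    n = len(energies)
    # One complex phase per (time, level) pair; sum over levels, then take
    # the squared magnitude and normalize so that K(0) = 1.
    phases = np.exp(-1j * np.outer(times, energies))  # shape (len(times), N)
    return np.abs(phases.sum(axis=1)) ** 2 / n**2

# Illustrative spectrum: a few made-up energy levels.
levels = [0.0, 1.0, 1.5, 2.7, 3.1]
times = np.linspace(0.0, 10.0, 5)
k = spectral_form_factor(levels, times)
print(k[0])  # → 1.0 (all phases align at t = 0)
```

The function is just the "map" described above turned into a time signal: spectra whose levels cluster or repel differently produce visibly different decay and revival patterns in K(t).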

The contours of the map are the result of a defining feature of quantum physics—the fact that quantum particles can only be found in certain states with distinct—quantized—energies. And when quantum particles interact, the energy of the whole combination is also limited to certain discrete options. For most quantum systems, some of the allowed energies are only possible for a single combination of the particles, while other energies can result from many different combinations. The availability of the various energy configurations in a system profoundly shapes the resulting physics, making the spectral form factor a valuable tool for researchers.

Galitski and Vikram tailored a formulation of the energy-time uncertainty relationship around the spectral form factor to develop their method. The approach naturally applies to the spread of information since information and energy are closely related in quantum physics.

While studying this diffusion, Galitski and Vikram focused their attention on an open question in physics called the fast-scrambling conjecture, which aims to pin down how long it takes for the organization of an initial collection of particles to be scrambled—to have its information mixed and spread out among all interacting particles until it becomes effectively unrecoverable. The conjecture is not concerned just with the fastest scrambling that is possible for a single case, but instead, it is about how the time that the scrambling takes changes based on the size or complexity of the system. 
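In its commonly quoted form (the standard Sekino-Susskind statement, not notation from the paper), the conjecture says the scrambling time of a system with N degrees of freedom held at temperature T can shrink no faster than logarithmically with N:

```latex
t_* \;\gtrsim\; \frac{\hbar}{2\pi k_B T}\,\ln N
```

Black holes are conjectured to saturate this logarithmic bound, which is why they are sometimes called nature's fastest scramblers.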

Information loss during quantum scrambling is similar to an ice sculpture melting. Suppose a sculptor spelled out the word “swan” in ice and then absentmindedly left it sitting in a tub of water on a sunny day. Initially, you can read the word at a glance. Later, the “s” has dropped onto its side and the top of the “a” has fallen off, making it look like a “u,” but you can still accurately guess what it once spelled. But, at some point, there’s just a puddle of water. It might still be cold, suggesting there was ice recently, but there’s no practical hope of figuring out if the ice was a lifelike swan sculpture, carved into the word “swan” or just a boring block of ice. 

How long the process takes depends on both the ice and the surroundings: Perhaps minutes for a small ice cube in a lake or an entire afternoon for a two-foot-tall centerpiece in a small puddle.

The ice sculpture is like the initial information contained in a portion of the quantum particles, and the surrounding water is all the other quantum particles they can interact with. But, unlike ice, each particle in the quantum world can simultaneously inhabit multiple states, called a quantum superposition, and can become inextricably linked together through quantum entanglement, which makes deducing the original state extra difficult after it has had the chance to change. 

For practical reasons, Galitski and Vikram designed their technique so that it applies to situations where researchers never know the exact states of all the interacting quantum particles. Their approach works for a range of cases, from those where the information is stored in a small fraction of the interacting quantum particles to those where it is spread over a majority of them: anything from an ice cube in a lake to a sculpture in a puddle. This gives the technique an advantage over previous approaches that only work for information stored on a few of the original particles.

Using the new technique, the pair can get insight into how long it takes a quantum message to effectively melt away for a wide variety of quantum situations. As long as they know the spectral form factor, they don’t need to know anything else. 

“It's always nice to be able to formulate statements that assume as little as possible, which means they're as general as possible within your basic assumptions,” says Vikram, who is the first author of the paper. “The neat little bonus right now is that the spectral form factor is a quantity that we can in principle measure.”

The ability of researchers to measure the spectral form factor will allow them to use the technique even when many details of the system are a mystery. If scientists don’t have enough details to mathematically derive the spectral form factor or to tailor a custom description of the particles and their interactions, a measured spectral form factor can still provide valuable insights. 

As an example of applying the technique, Galitski and Vikram looked at a quantum model of scrambling called the Sachdev-Ye-Kitaev (SYK) model. Some researchers believe there might be similarities between the SYK model and the way information is scrambled and lost when it falls into a black hole. 

Galitski and Vikram’s results revealed that the scrambling time became increasingly long as they looked at larger and larger numbers of particles instead of settling into conditions that scrambled as rapidly as possible. 

“Large collections of particles take a really long time to lose information into the rest of the system,” Vikram says. “That is something we can get in a very simple way without knowing anything about the structure of the SYK model, other than its energy spectrum. And it's related to things people have been thinking about in simplified models for black holes. But the real inside of a black hole may turn out to be something completely different that no one's imagined.”

Galitski and Vikram are hoping future experiments will confirm their results, and they plan to continue looking for more ways to relate a general quantum feature to the resulting dynamics without relying on many specific details. They and their colleagues are also investigating properties of the spectral form factor that every system should satisfy and are working to identify constraints on scrambling that are universal for all quantum systems.

Original story by Bailey Bedford: https://jqi.umd.edu/news/focused-approach-can-help-untangle-messy-quantum-scrambling-problems

This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0001911. 

New Laser Experiment Spins Light Like a Merry-go-round

In day-to-day life, light seems intangible. We walk through it and create and extinguish it with the flip of a switch. But, like matter, light actually carries a little punch—it has momentum. Light constantly nudges things and can even be used to push spacecraft. Light can also spin objects if it carries orbital angular momentum (OAM)—the property associated with a rotating object’s tendency to keep spinning.

Scientists have known that light can have OAM since the early 90s, and they’ve discovered that the OAM of light is associated with swirls or vortices in the light’s phase—the position of the peaks or troughs of the electromagnetic waves that make up the light. Initially, research on OAM focused on vortices that exist in the cross section of a light beam—the phase turning like the propeller of a plane flying along the light’s path. But in recent years, physicists at UMD, led by UMD Physics Professor Howard Milchberg, have discovered that light can carry its OAM in a vortex turned to the side—the phase spins like a wheel on a car, rolling along with the light. The researchers called these light structures spatio-temporal optical vortices (STOVs) and described the momentum they carry as transverse OAM.
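In the standard textbook description (not the paper's own notation), a conventional OAM beam carries an azimuthal phase in its transverse cross section,

```latex
E(r,\phi,z) \;\propto\; e^{i\ell\phi},
```

where the angle winds around the propagation axis and each photon carries an OAM of ell times hbar along that axis. In a STOV, the analogous full-cycle phase winding instead circulates in a plane containing the propagation direction and time, so the resulting OAM vector points sideways, transverse to the beam.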

“Before our experiments, it wasn’t appreciated that particles of light—photons—could have sideways-pointing OAM,” Milchberg says. “Colleagues initially thought it was weird or wrong. Now research on STOVs is rapidly growing worldwide, with possible applications in areas such as optical communications, nonlinear optics, and exotic forms of microscopy.”

In an article published on Feb. 28, 2024, in the journal Physical Review X, the team describes a novel technique they used to change the transverse OAM of a light pulse as it travels. Their method requires some laboratory tools, like specialized lasers, but in many ways, it resembles spinning a playground merry-go-round or twisting a wrench.

Similarities exist between spinning everyday items, like a playground merry-go-round, and spinning vortices of light. Image credit: Martin Vorel

“Because STOVs are a new field, our main goal is gaining a fundamental understanding of how they work. And one of the best ways to do that is to mess with them,” says Scott Hancock, a UMD physics postdoctoral researcher and first author of the paper. “Basically, what are the physics rules for changing the transverse OAM of a light pulse?”

In previous work, Milchberg, Hancock and colleagues described how they created and observed pulses of light that carry transverse OAM, and in a paper published in Physical Review Letters in 2021, they presented a theory that describes how to calculate this OAM and provides a roadmap for changing a STOV’s transverse OAM.

The consequences described in the team’s theory aren’t so different from the physics at play when kids are on a playground. When you spin a merry-go-round you change the angular momentum by pushing it, and the effectiveness of a push depends on where you apply the force—you get nothing from pushing inwards on the axle and the greatest change from pushing sideways on the outer edge. The mass of the merry-go-round and everything on it also impact the angular momentum. For instance, kids jumping off a moving merry-go-round carry away some of the angular momentum, making the merry-go-round easier to stop.

The team’s theory of the transverse OAM of light looks very similar to the physics governing the spin of a merry-go-round. However, their merry-go-round is a disk made of light energy laid out in one dimension of space and another of time instead of two spatial dimensions, and its axis is moving at the speed of light. Their theory predicts that pushing on different parts of a merry-go-round light pulse can change its transverse OAM by different amounts and that if a bit of light is scattered off a speck of dust and leaves the pulse then the pulse loses some transverse OAM with it.

The team focused on testing what happened when they gave the transverse OAM vortices a shove. But changing the transverse OAM of a light pulse isn’t as easy as giving a merry-go-round a solid push; there isn’t any matter to grab onto and apply a force. To change the transverse OAM of a light pulse, you need to flick its phase.

As light journeys through space, its phase naturally shifts, and how fast the phase changes depends on the index of refraction of the material that the light travels through. So Milchberg and the team predicted that if they could create a rapid change in the refractive index at selected locations in the pulse as it flew by, it would flick that portion of the pulse. However, if the entire pulse passes through the area with a new index of refraction, they predicted that there would be no change in OAM—like having someone on the opposite side of a merry-go-round trying to slow it down while you are trying to speed it up.
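The size of such a phase "flick" can be estimated from the standard relation between refractive index and accumulated optical phase, delta_phi = 2 * pi * delta_n * L / lambda. A minimal sketch with illustrative numbers (the values below are made up for the example, not taken from the experiment; a plasma lowers the index, hence the negative index change):

```python
import math

def phase_shift(delta_n, length_m, wavelength_m):
    """Extra optical phase (in radians) accumulated across a region of
    length `length_m` whose refractive index is shifted by `delta_n`:
    delta_phi = 2*pi * delta_n * L / lambda."""
    return 2 * math.pi * delta_n * length_m / wavelength_m

# Illustrative (made-up) numbers: an index change of -1e-3 across a
# 50-micrometer plasma column, probed at an 800 nm laser wavelength.
dphi = phase_shift(-1e-3, 50e-6, 800e-9)
print(round(dphi, 3))  # → -0.393, a fraction of a radian
```

Even a small index change over a short distance produces a measurable phase kick, which is why a micrometers-wide transient plasma wire can act like a hand pushing one spot on the optical merry-go-round.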

To test their theory, the team needed to develop the ability to flick a small section of a pulse moving at the speed of light. Luckily, Milchberg’s lab had already invented the appropriate tools. In multiple previous experiments, the group has manipulated light by using lasers for the rapid generation of plasmas—a phase of matter in which electrons have been torn free from their atoms. The process is useful because the plasma brings with it a new index of refraction.

In the new experiment, the team used a laser to make narrow columns of plasma, which they called transient wires, that are small enough and flash into existence quickly enough to target specific regions of the pulse mid-flight. The index of refraction of a transient wire plays the role of a child pushing the merry-go-round.

The researchers generated the transient wire and meticulously aligned all their beams so that the wire precisely intercepted the desired section of the OAM-carrying pulse. After part of the pulse passed through the wire and received a flick, the pulse reached a special optical pulse analyzer the team invented. As predicted, when the researchers analyzed the collected data, they found that the refractive index flick changed the pulse’s transverse OAM.

They then made slight adjustments in the orientation and timing of the transient wire to target different parts of the light pulse. The team performed multiple measurements with the transient wire crossing through the top and bottom of two types of pulses: STOVs that already carried transverse OAM and a second type called a Gaussian pulse without any OAM at all. For the two cases, corresponding to pushing an already spinning or a stationary merry-go-round, they found that the biggest push was achieved by applying the transient wire flick near the top and bottom edges of the light pulse. For each position, they also adjusted the timing of the transient wire laser on various runs so that different amounts of the pulse traveled through the plasma and the vortex received a different amount of kick.

Researchers who previously generated vortices of light that they describe as “edge-first flying donuts” have now performed experiments where they disturb the path of the vortices mid-flight to study changes to their momentum. Image credit: Intense Laser-Matter Interactions Lab, UMD

The team also showed that, like a merry-go-round, pushing with the spin adds OAM and pushing against it removes OAM. Since opposite edges of the optical merry-go-round are traveling in opposite directions, the plasma wire could fulfill both roles by changing its position even though it always pushed in the same direction. The group says the calculations they performed using their theory are in excellent agreement with the results from their experiment.

“It turns out that ultrafast plasma provides a precision test of our transverse OAM theory,” says Milchberg. “It registers a measurable perturbation to the pulse, but not so strong a perturbation that the pulse is completely messed up.”

The team plans to continue exploring the physics associated with transverse OAM. The techniques they have developed could provide new insights into how OAM changes over time during the interaction of an intense laser beam with matter (which is where Milchberg’s lab first discovered transverse OAM). The group plans to investigate applications of transverse OAM, such as encoding information into the swirling pulses of light. Their results from this experiment demonstrate that the naturally occurring fluctuations in the index of refraction of air are too slow to change a pulse’s transverse OAM and distort any information it is carrying.

“It's at an early stage in this research,” Hancock says. “It's hard to say where it will go. But it appears to have a lot of promise for basic physics and applications. Calling it exciting is an understatement.”

Story by Bailey Bedford

In addition to Milchberg and Hancock, graduate student Andrew Goffin and UMD physics postdoctoral associate Sina Zahedpour were co-authors.