Particle Physics and Quantum Simulation Collide in New Proposal

Quantum particles have unique properties that make them powerful tools, but those very same properties can be the bane of researchers. Each quantum particle can inhabit a combination of multiple possibilities, called a quantum superposition, and together they can form intricate webs of connection through quantum entanglement.

These phenomena are the main ingredients of quantum computers, but they also often make it almost impossible to use traditional tools to track a collection of strongly interacting quantum particles for very long. Both human brains and supercomputers, which each operate using non-quantum building blocks, are easily overwhelmed by the rapid proliferation of the resulting interwoven quantum possibilities.

Image caption: A spring-like force, called the strong force, works to keep quarks—represented by glowing spheres—together as they move apart after a collision. Quantum simulations proposed to run on superconducting circuits might provide insight into the strong force and how collisions produce new particles. The diagrams in the background represent components used in superconducting quantum devices. (Credit: Ron Belyansky)

In nuclear and particle physics, as well as many other areas, the challenges involved in determining the fate of quantum interactions and following the trajectories of particles often hinder research or force scientists to rely heavily on approximations. To counter this, researchers are actively inventing techniques and developing novel computers and simulations that promise to harness the properties of quantum particles in order to provide a clearer window into the quantum world.

Zohreh Davoudi, an associate professor of physics at the University of Maryland and Maryland Center for Fundamental Physics, is working to ensure that the relevant problems in her fields of nuclear and particle physics don’t get overlooked and are instead poised to reap the benefits when quantum simulations mature. To pursue that goal, Davoudi and members of her group are combining their insights into nuclear and particle physics with the expertise of colleagues—like Adjunct Professor Alexey Gorshkov and Ron Belyansky, a former JQI graduate student under Gorshkov and a current postdoctoral associate at the University of Chicago—who are familiar with the theories that quantum technologies are built upon. 

In an article published earlier this year in the journal Physical Review Letters, Belyansky, who is the first author of the paper, together with Davoudi, Gorshkov and their colleagues, proposed a quantum simulation that might be possible to implement soon. They propose using superconducting circuits to simulate a simplified model of collisions between fundamental particles called quarks and mesons (which are themselves made of quarks and antiquarks). In the paper, the group presented the simulation method and discussed what insights the simulations might provide about the creation of particles during energetic collisions. 

Particle collisions—like those at the Large Hadron Collider—break particles into their constituent pieces and release energy that can form new particles. These energetic experiments that spawn new particles are essential to uncovering the basic building blocks of our universe and understanding how they fit together to form everything that exists. When researchers interpret the messy aftermath of collision experiments, they generally rely on simulations to figure out how the experimental data matches the various theories developed by particle physicists.

Quantum simulations are still in their infancy. The team’s proposal is an initial effort that simplifies things by avoiding the complexity of three-dimensional reality, and it represents an early step on the long journey toward quantum simulations that can tackle the most realistic fundamental theories that Davoudi and other particle physicists are most eager to explore. The diverse insights of many theorists and experimentalists must come together and build on each other before quantum simulations will be mature enough to tackle challenging problems, like following the evolution of matter after highly energetic collisions.

“We, as theorists, try to come up with ideas and proposals that not only are interesting from the perspective of applications but also from the perspective of giving experimentalists the motivation to go to the next level and push to add more capabilities to the hardware,” says Davoudi, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS) and a Senior Investigator at the Institute for Robust Quantum Simulation (RQS). “There was a lot of back and forth regarding which model and which platform. We learned a lot in the process; we explored many different routes.”

A Quantum Solution to a Quantum Problem

The meetings with Davoudi and her group brought particle physics concepts to Belyansky’s attention. Those ideas were bouncing around inside his head when he came across a mathematical tool that allows physicists to translate a model into a language where particle behaviors look fundamentally different. The ideas collided and crystallized into a possible method to efficiently simulate a simple particle physics model, called the Schwinger model. The key was getting the model into a form that could be efficiently represented on a particular quantum device. 

Belyansky had stumbled upon a tool for mapping between certain theories that describe fermions and theories that describe bosons. Every fundamental quantum particle is either a fermion or a boson, and whether a particle is one or the other governs how it behaves. If a particle is a fermion, as protons, quarks and electrons are, then no two particles of that type can ever share the same quantum state. In contrast, bosons, like the mesons formed by quarks, are willing to share the same state with any number of their identical brethren. Switching between two descriptions of a theory can provide researchers with entirely new tools for tackling a problem.

Based on Belyansky’s insight, the group determined that translating the fermion-based description of the Schwinger model into the language of bosons could be useful for simulating quark and meson collisions. The translation put the model into a form that more naturally mapped onto the technology of circuit quantum electrodynamics (QED). Circuit QED uses light trapped in superconducting circuits to create artificial atoms, which can be used as the building blocks of quantum computers and quantum simulations. The pieces of a circuit can combine to behave like a boson, and the group mapped the boson behavior onto the behavior of quarks and mesons during collisions.
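For readers who want something concrete to experiment with, the sketch below sets up the lattice Schwinger model in its widely used spin (qubit) formulation and diagonalizes it exactly for a handful of sites. This is a generic textbook starting point, not the bosonized circuit-QED mapping the team actually proposes, and the lattice size, couplings and sign conventions here are illustrative choices.

```python
# Minimal exact-diagonalization sketch of the lattice Schwinger model in its
# standard spin (qubit) form. Illustration only; not the paper's bosonized mapping.
import numpy as np
from functools import reduce

N = 6          # number of staggered lattice sites (kept tiny for exact diagonalization)
x = 1.0        # hopping (kinetic) coefficient
mu = 0.5       # fermion mass parameter

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                 # sigma^-

def op(single, site):
    """Embed a single-qubit operator at `site` in the N-qubit Hilbert space."""
    mats = [single if i == site else I2 for i in range(N)]
    return reduce(np.kron, mats)

H = np.zeros((2**N, 2**N), dtype=complex)

# Hopping term: x * sum_n (sigma^+_n sigma^-_{n+1} + h.c.)
for n in range(N - 1):
    H += x * (op(sp, n) @ op(sm, n + 1) + op(sm, n) @ op(sp, n + 1))

# Staggered mass term: (mu/2) * sum_n (-1)^n sigma^z_n
for n in range(N):
    H += 0.5 * mu * (-1) ** n * op(sz, n)

# Electric-field term: sum over links of L_n^2, with the gauge field eliminated,
# L_n = (1/2) * sum_{m<=n} (sigma^z_m + (-1)^m), assuming zero background field.
for n in range(N - 1):
    L = sum(0.5 * (op(sz, m) + (-1) ** m * np.eye(2**N)) for m in range(n + 1))
    H += L @ L

energies = np.linalg.eigvalsh(H)
print("Lowest few energies:", np.round(energies[:4], 3))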

This type of simulation that uses a device’s natural behaviors to directly mimic a behavior of interest is called an analog simulation. This approach is generally more efficient than designing simulations to be compatible with diverse quantum computers. And since analog approaches lean into the underlying technology’s natural behavior, they can play to the strengths of early quantum devices. In the paper, the team described how their analog simulation could run on a relatively simple quantum device without relying on many approximations.

"It is particularly exciting to contribute to the development of analog quantum simulators—like the one we propose—since they are likely to be among the first truly useful applications of quantum computers," says Gorshkov, who is also a Physicist at the National Institute of Standards and Technology, a QuICS Fellow and an RQS Senior Investigator.

The translation technique Belyansky and his collaborators used has a limitation: It only works in one space dimension. The restriction to one dimension means that the model is unable to replicate real experiments, but it also makes things much simpler and provides a more practical goal for early quantum simulations. Physicists call this sort of simplified case a toy model. The team decided this one-dimensional model was worth studying because its description of the force that binds quarks into mesons—the strong force—still shares features with how it behaves in three space dimensions.

“Playing around with these toy models and being able to actually see the outcome of these quantum mechanical collision processes would give us some insight as to what might go on in actual strong force processes and may lead to a prediction for experiments,” Davoudi says. “That's sort of the beauty of it.” 

Scouting Ahead with Current Computers 

The researchers did more than lay out a proposal for experimentally implementing their simulations using quantum technology. By focusing on the model under restrictions, like limiting the collision energy, they simplified the calculations enough to explore certain scenarios using a regular computer without any quantum advantages.

Even with the imposed limitations, the simplified model was still able to simulate more than the most basic collisions. Some of the simulations describe collisions that spawned new particles instead of merely featuring the initial quarks and mesons bouncing around without anything new popping up. The creation of particles during collisions is an important feature that prior simulation methods fell short of capturing.

These results help illustrate the potential of the approach to provide insights into how particle collisions produce new particles. While similar simulation techniques that don’t harness quantum power will always be limited, they will remain useful for future quantum research: Researchers can use them in identifying which quantum simulations have the most potential and in confirming if a quantum simulation is performing as expected.

Continuing the Journey

There is still a lot of work to be done before Davoudi and her collaborators can achieve their goal of simulating more realistic models in nuclear and particle physics. Belyansky says that both one-dimensional toy models and the tools they used in this project will likely deliver more results moving forward.

“To get to the ultimate goal, we need to add more ingredients,” Belyansky says. “Adding more dimensions is difficult, but even in one dimension, we can make things more complicated. And on the experimental side, people need to build these things.”

For her part, Davoudi is continuing to collaborate with several research groups to develop quantum simulations for nuclear and particle physics research. 

“I'm excited to continue this kind of multidisciplinary collaboration, where I learn about these simpler, more experimentally feasible models that have features in common with theories of interest in my field and to try to see whether we can achieve the goal of realizing them in quantum simulators,” Davoudi says. “I'm hoping that this continues, that we don't stop here.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/particle-physics-and-quantum-simulation-collide-new-proposal

 

New Photonic Chip Spawns Nested Topological Frequency Comb

Scientists on the hunt for compact and robust sources of multicolored laser light have generated the first topological frequency comb. Their result, which relies on a small silicon nitride chip patterned with hundreds of microscopic rings, will appear in the June 21, 2024 issue of the journal Science.

Light from an ordinary laser shines with a single, sharply defined color—or, equivalently, a single frequency. A frequency comb is like a souped-up laser, but instead of emitting a single frequency of light, a frequency comb shines with many pristine, evenly spaced frequency spikes. The even spacing between the spikes resembles the teeth of a comb, which lends the frequency comb its name.
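As a rough numerical illustration of where the "teeth" come from (this is generic signal processing, not a model of the new chip), the spectrum of a perfectly periodic train of short pulses consists of evenly spaced spikes at multiples of the repetition rate:

```python
import numpy as np

# Build a periodic train of short Gaussian pulses (all units arbitrary).
fs = 1000            # samples per time unit
T = 100.0            # total duration
t = np.arange(0, T, 1 / fs)
rep_rate = 1.0       # pulses per time unit -> comb teeth spaced by 1 frequency unit
width = 0.02         # pulse duration

signal = sum(np.exp(-0.5 * ((t - k / rep_rate) / width) ** 2)
             for k in range(int(T * rep_rate)))

# The spectrum is a "comb": sharp spikes at integer multiples of the repetition rate.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
teeth = np.sort(freqs[np.argsort(spectrum)[-10:]])   # ten strongest spectral bins
print(np.round(teeth, 2))   # ~[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```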

Image caption: A new chip with hundreds of microscopic rings generated the first topological frequency comb. (Credit: E. Edwards)

The earliest frequency combs required bulky equipment to create. More recently researchers have been focused on miniaturizing them into integrated, chip-based platforms. Despite big improvements in shrinking the equipment needed to generate frequency combs, the fundamental ideas haven’t changed. Creating a useful frequency comb requires a stable source of light and a way to disperse that light into the teeth of the comb by taking advantage of optical gain, loss and other effects that emerge when the source of light gets more intense.

In the new work, JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and physics at the University of Maryland (UMD), JQI Fellow Kartik Srinivasan, who is also a Fellow of the National Institute of Standards and Technology, and several colleagues have combined two lines of research into a new method for generating frequency combs. One line is attempting to miniaturize the creation of frequency combs using microscopic resonator rings fabricated out of semiconductors. The second involves topological photonics, which uses patterns of repeating structures to create pathways for light that are immune to small imperfections in fabrication.

“The world of frequency combs is exploding in single-ring integrated systems,” says Chris Flower, a graduate student at JQI and the UMD Department of Physics and the lead author of the new paper. “Our idea was essentially, could similar physics be realized in a special lattice of hundreds of coupled rings? It was a pretty major escalation in the complexity of the system.”

By designing a chip with hundreds of resonator rings arranged in a two-dimensional grid, Flower and his colleagues engineered a complex pattern of interference that takes input laser light and circulates it around the edge of the chip while the material of the chip itself splits it up into many frequencies. In the experiment, the researchers took snapshots of the light from above the chip and showed that it was, in fact, circulating around the edge. They also siphoned out some of the light to perform a high-resolution analysis of its frequencies, demonstrating that the circulating light had the structure of a frequency comb twice over. They found one comb with relatively broad teeth and, nestled within each tooth, they found a smaller comb hiding.

Image caption: A schematic of the new experiment. Incoming pulsed laser light (the pump laser) enters a chip that hosts hundreds of microrings. Researchers used an IR camera above the chip to capture images of light circulating around the edge of the chip, and they used a spectrum analyzer to detect a nested frequency comb in the circulating light.

Although this nested comb is only a proof of concept at the moment—its teeth aren’t quite evenly spaced and they are a bit too noisy to be called pristine—the new device could ultimately lead to smaller and more efficient frequency comb equipment that can be used in atomic clocks, rangefinding detectors, quantum sensors and many other tasks that call for accurate measurements of light. The well-defined spacing between spikes in an ideal frequency comb makes them excellent tools for these measurements. Just as the evenly spaced lines on a ruler provide a way to measure distance, the evenly spaced spikes of a frequency comb allow the measurement of unknown frequencies of light. Mixing a frequency comb with another light source produces a new signal that can reveal the frequencies present in the second source.
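The "ruler" idea can be made concrete with a toy calculation (the numbers here are hypothetical, not from the experiment): beating an unknown tone against the nearest comb tooth leaves only a small difference frequency to measure.

```python
comb_spacing = 10.0          # hypothetical tooth spacing (arbitrary frequency units)
unknown = 73.4               # an unknown frequency we want to measure

# Mixing the unknown tone with the comb produces a low-frequency "beat" against
# the nearest tooth, which ordinary electronics can count precisely.
nearest_tooth = round(unknown / comb_spacing) * comb_spacing
beat = abs(unknown - nearest_tooth)
print(nearest_tooth, round(beat, 1))   # 70.0 3.4  -> unknown = 70.0 + 3.4
```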

Repetition Breeds Repetition

At least qualitatively, the repeating pattern of microscopic ring resonators on the new chip begets the pattern of frequency spikes that circulate around its edge.

Individually, the microrings form tiny little cells that allow photons—the quantum particles of light—to hop from ring to ring. The shape and size of the microrings were carefully chosen to create just the right kind of interference between different hopping paths, and, taken together, the individual rings form a super-ring. Collectively all the rings disperse the input light into the many teeth of the comb and guide them along the edge of the grid.

The microrings and the larger super-ring provide the system with two different time and length scales, since it takes light longer to travel around the larger super-ring than any of the smaller microrings. This ultimately leads to the generation of the two nested frequency combs: One is a coarse comb produced by the smaller microrings, with frequency spikes spaced widely apart. Within each of those coarsely spaced spikes lives a finer comb, produced by the super-ring. The authors say that this nested comb-within-a-comb structure, reminiscent of Russian nesting dolls, could be useful in applications that require precise measurements of two different frequencies that happen to be separated by a wide gap.
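A toy way to picture the comb-within-a-comb (illustrative spacings only, not measured values): coarse teeth set by the small microrings, each dressed with a finer set of teeth set by the super-ring.

```python
# Hypothetical spacings, chosen only to make the nesting easy to see.
coarse_spacing = 100.0   # set by the small microrings (widely spaced teeth)
fine_spacing = 1.0       # set by the much larger super-ring (finely spaced teeth)

teeth = sorted(m * coarse_spacing + n * fine_spacing
               for m in range(3)        # a few coarse teeth
               for n in range(-3, 4))   # a small sub-comb around each one

print(teeth)
# Clusters of finely spaced lines (1.0 apart) centered on coarse lines (100.0 apart).
```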

Getting Things Right

It took more than four years for the experiment to come together, a problem exacerbated by the fact that only one company in the world could make the chips that the team had designed.

Early chip samples had microrings that were too thick with bends that were too sharp. Once input light passed through these rings, it would scatter in all kinds of unwanted ways, washing out any hope of generating a frequency comb. “The first generation of chips didn’t work at all because of this,” Flower says. Returning to the design, he trimmed down the ring width and rounded out the corners, ultimately landing on a third generation of chips that were delivered in mid-2022.

While iterating on the chip design, Flower and his colleagues also discovered that it would be difficult to deliver enough laser power into the chip. In order for their chip to work, the intensity of the input light needed to exceed a threshold—otherwise no frequency comb would form. Normally they would have reached for a commercial CW laser, which delivers a continuous beam of light. But those lasers delivered too much heat to the chip, causing it to burn out or swell and become misaligned with the light source. They needed to concentrate the energy in bursts to deal with these thermal issues, so they pivoted to a pulsed laser that delivers its energy in a fraction of a second.

But that introduced its own problems: Off-the-shelf pulsed lasers had pulses that were too short and contained too many frequencies. They tended to introduce a jumble of unwanted light—both on the edge of the chip and through its middle—instead of the particular edge-constrained light that the chip was designed to disperse into a frequency comb. Due to the long lead time and expense involved in getting new chips, the team needed to make sure they found a laser that balanced peak power delivery with longer duration, tunable pulses.

“I sent out emails to basically every laser company,” Flower says. “I searched to find somebody who would make me a custom tunable and long-pulse-duration laser. Most people said they don't make that, and they're too busy to do custom lasers. But one company in France got back to me and said, ‘We can do that. Let's talk.’”

His persistence paid off, and, after a couple shipments back and forth from France to install a beefier cooling system for the new laser, the team finally sent the right kind of light into their chip and saw a nested frequency comb come out.

The team says that while their experiment is specific to a chip made from silicon nitride, the design could easily be translated to other photonic materials that could create combs in different frequency bands. They also consider their chip the introduction of a new platform for studying topological photonics, especially in applications where a threshold exists between relatively predictable behavior and more complex effects—like the generation of a frequency comb.

Original story by Chris Cesare: https://jqi.umd.edu/news/new-photonic-chip-spawns-nested-topological-frequency-comb

In addition to Hafezi, Srinivasan and Flower, there were eight other authors of the new paper: Mahmoud Jalali Mehrabad, a postdoctoral researcher at JQI; Lida Xu, a graduate student at JQI; Grégory Moille, an assistant research scientist at JQI; Daniel G. Suarez-Forero, a postdoctoral researcher at JQI; Oğulcan Örsel, a graduate student at the University of Illinois at Urbana-Champaign (UIUC); Gaurav Bahl, a professor of mechanical science and engineering at UIUC; Yanne Chembo, a professor of electrical and computer engineering at UMD and the director of the Institute for Research in Electronics and Applied Physics; and Sunil Mittal, an assistant professor of electrical and computer engineering at Northeastern University and a former postdoctoral researcher at JQI.

This work was supported by the Air Force Office of Scientific Research (FA9550-22-1-0339), the Office of Naval Research (N00014-20-1-2325), the Army Research Laboratory (W911NF1920181), the National Science Foundation (DMR-2019444), and the Minta Martin and Simons Foundations.

 

Attacking Quantum Models with AI: When Can Truncated Neural Networks Deliver Results?

Currently, computing technologies are rapidly evolving and reshaping how we imagine the future. Quantum computing is taking its first toddling steps toward delivering practical results that promise unprecedented abilities. Meanwhile, artificial intelligence remains in public conversation as it’s used for everything from writing business emails to generating bespoke images or songs from text prompts to producing deep fakes.

Some physicists are exploring the opportunities that arise when the power of machine learning—a widely used approach in AI research—is brought to bear on quantum physics. Machine learning may accelerate quantum research and provide insights into quantum technologies, and quantum phenomena present formidable challenges that researchers can use to test the bounds of machine learning.

When studying quantum physics or its applications (including the development of quantum computers), researchers often rely on a detailed description of many interacting quantum particles. But the very features that make quantum computing potentially powerful also make quantum systems difficult to describe using current computers. In some instances, machine learning has produced descriptions that capture the most significant features of quantum systems while ignoring less relevant details—efficiently providing useful approximations.

Image caption: An artistic rendering of a neural network consisting of two layers. The top layer represents a real collection of quantum particles, like atoms in an optical lattice. The connections with the hidden neurons below account for the particles’ interactions. (Credit: Modified from original artwork created by E. Edwards/JQI)

In a paper published May 20, 2024, in the journal Physical Review Research, two researchers at JQI presented new mathematical tools that will help researchers use machine learning to study quantum physics. And using these tools, they have identified new opportunities in quantum research where machine learning can be applied.

“I want to understand the limit of using traditional classical machine learning tools to understand quantum systems,” says JQI graduate student Ruizhi Pan, who was the first author of the paper.

The standard tool for describing collections of quantum particles is the wavefunction, which provides a complete description of the quantum state of the particles. But obtaining the wavefunction for more than a handful of particles tends to require impractical amounts of time and resources.

Researchers have previously shown that AI can approximate some families of quantum wavefunctions using fewer resources. In particular, physicists, including CMTC Director and JQI Fellow Sankar Das Sarma, have studied how to represent quantum states using neural networks—a common machine learning approach in which webs of connections handle information in ways reminiscent of the neurons firing in a living brain. Artificial neural networks are made of nodes—sometimes called artificial neurons—and connections of various strengths between them.

Today, neural networks take many forms and are applied to diverse applications. Some neural networks analyze data, like inspecting the individual pixels of a picture to tell if it contains a person, while others model a process, like generating a natural-sounding sequence of words given a prompt or selecting moves in a game of chess. The webs of connections formed in neural networks have proven useful at capturing hard-to-identify relationships, patterns and interactions in data and models, including the unique interactions of quantum particles described by wavefunctions.

But neural networks aren’t a magic solution to every situation or even to approximating every wavefunction. Sometimes, to deliver useful results, the network would have to be too big and complex to practically implement. Researchers need a strong theoretical foundation to understand when neural networks are useful and under what circumstances they fall prey to errors.

In the new paper, Pan and JQI Fellow Charles Clark investigated a type of neural network called a restricted Boltzmann machine (RBM), in which the nodes are split into two layers and connections are only allowed between nodes in different layers. One layer is called the visible, or input, layer, and the second is called the hidden layer, since researchers generally don’t directly manipulate or interpret it as much as they do the visible layer.

“The restricted Boltzmann machine is a concept that is derived from theoretical studies of classical ‘spin glass’ systems that are models of disordered magnets,” Clark says. “In the 1980s, Geoffrey Hinton and others applied them to the training of artificial neural networks, which are now widely used in artificial intelligence. Ruizhi had the idea of using RBMs to study quantum spin systems, and it turned out to be remarkably fruitful.”

For RBM models of quantum systems, physicists frequently use each node of the visible layer to represent a quantum particle, like an individual atom, and use the connections made through the hidden layer to capture the interactions between those particles. As the size and complexity of quantum states grow, a neural network needs more and more hidden nodes to keep up, eventually becoming unwieldy.
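In the standard neural-network representation of a spin wavefunction (the generic RBM ansatz, not the specific constructions analyzed in the paper), the hidden layer can be summed over analytically, leaving a closed-form amplitude for each spin configuration. A minimal sketch with made-up parameters:

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """Unnormalized RBM amplitude for one spin configuration sigma (entries +/-1).
    The hidden units are summed over analytically, leaving a product of
    2*cosh terms, one factor per hidden node."""
    theta = b + W @ sigma                      # effective field felt by each hidden node
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 16                    # hypothetical sizes for illustration
a = 0.01 * rng.standard_normal(n_visible)      # biases on visible nodes (the spins)
b = 0.01 * rng.standard_normal(n_hidden)       # biases on hidden nodes
W = 0.01 * rng.standard_normal((n_hidden, n_visible))  # visible-hidden couplings

sigma = rng.choice([-1.0, 1.0], size=n_visible)  # one configuration of 8 spins
print(rbm_amplitude(sigma, a, b, W))
```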

However, the exact relationships between the complexity of a quantum state, the number of hidden nodes used in a neural network, and the resulting accuracy of the approximation are difficult to pin down. This lack of clarity is an example of the black box problem that permeates the field of machine learning. It exists because researchers don’t meticulously engineer the intricate web of a neural network but instead rely on repeated steps of trial and error to find connections that work. This approach often delivers more accurate or efficient results than researchers know how to achieve by working from first principles, but it doesn’t explain why the connections that make up the neural network deliver the desired result—so the results might as well have come from a black box. This built-in inscrutability makes it difficult for physicists to know which quantum models are practical to tackle with neural networks.

Pan and Clark decided to peek behind the veil of the hidden layer and investigate how neural networks boil down the essence of quantum wavefunctions. To do this, they focused on neural network models of a one-dimensional line of quantum spins. A spin is like a little magnetic arrow that wants to point along a magnetic field and is key to understanding how magnets, superconductors and most quantum computers function.

Spins naturally interact by pushing and pulling on each other. Through chains of interactions, even two distant spins can become correlated—meaning that observing one spin also provides information about the other spin. All the correlations between particles tend to drive quantum states into unmanageable complexity. 

Pan and Clark did something that at first glance might not seem relevant to the real world: They imagined and analyzed a neural network that uses infinitely many hidden nodes to model a fixed number of spins.

“In reality of course we don't hope to use a neural network with an infinitely large system size,” Pan says. “We often want to use finite size neural networks to do the numerical computations, so we need to analyze the effects of doing truncations.”

Pan and Clark already knew that using more hidden nodes generally produced more accurate results, but the research community only had a fuzzy understanding of how the accuracy suffers when fewer hidden nodes are used. By backing up and getting a view of the infinite case, Pan and Clark were able to describe the hypothetical, perfectly accurate representation and observe the contributions made by the infinite addition of hidden nodes. The nodes don’t all contribute equally. Some capture the basics of significant features, while many contribute small corrections.

The pair developed a method that sorts the hidden nodes into groups based on how much correlation they capture between spins. Based on this approach, Pan and Clark developed mathematical tools for researchers to use when developing, comparing and interpreting neural networks. With their new perspective and tools, Pan and Clark identified and analyzed the forms of errors they expect to arise from truncating a neural network, and they identified theoretical limits on how big the errors can get in various circumstances. 
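To see what truncation means in practice, here is a toy exercise (not the authors' grouping scheme or their error bounds): keep only the hidden nodes with the strongest couplings and watch how a normalization-independent quantity, the ratio of two amplitudes, drifts as more nodes are dropped. The coupling pattern below is invented purely for illustration.

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    theta = b + W @ sigma
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(1)
nv, nh = 8, 64
a = 0.05 * rng.standard_normal(nv)
b = 0.05 * rng.standard_normal(nh)
# Made-up couplings whose strength falls off quickly from one hidden node to the next.
W = rng.standard_normal((nh, nv)) * np.geomspace(1.0, 1e-3, nh)[:, None]

s1 = rng.choice([-1.0, 1.0], size=nv)
s2 = rng.choice([-1.0, 1.0], size=nv)
exact = rbm_amplitude(s1, a, b, W) / rbm_amplitude(s2, a, b, W)

order = np.argsort(-np.linalg.norm(W, axis=1))      # strongest hidden nodes first
for keep in (64, 32, 16, 8, 4):
    idx = order[:keep]
    approx = rbm_amplitude(s1, a, b[idx], W[idx]) / rbm_amplitude(s2, a, b[idx], W[idx])
    print(keep, abs(approx - exact) / abs(exact))   # error grows as more nodes are dropped
```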

In previous work, physicists generally relied on restricting the number of connections allowed for each hidden node to keep the complexity of the neural network in check. This in turn generally limited the reach of interactions between particles that could be modeled—earning the resulting collection of states the name short-range RBM states.

Pan and Clark’s work revealed a chance to apply RBMs outside of those restrictions. They defined a new group of states, called long-range-fast-decay RBM states, that have less strict conditions on hidden node connections but that still often remain accurate and practical to implement. The looser restrictions on the hidden node connections allow a neural network to represent a greater variety of spin states, including ones with interactions stretching farther between particles.

“There are only a few exactly solvable models of quantum spin systems, and their computational complexity grows exponentially with the number of spins,” says Clark. “It is essential to find ways to reduce that complexity. Remarkably, Ruizhi discovered a new class of such systems that are efficiently attacked by RBMs. It’s the old hero-returns-home story: from classical spin glass came the RBM, which grew up among neural networks, and returned home with a gift of order to quantum spin systems.”

The pair’s analysis also suggests that their new tools can be adapted to work for more than just one-dimensional chains of spins, including particles arranged in two or three dimensions. The authors say these insights can help physicists explore the divide between states that are easy to model using RBMs and those that are impractical. The new tools may also guide researchers to be more efficient at pruning a network’s size to save time and resources. Pan says he hopes to further explore the implications of their theoretical framework.

“I'm very happy that I realized my goal of building our research results on a solid mathematical basis,” Pan says. “I'm very excited that I found such a research field which is of great prospect and in which there are also many unknown problems to be solved in the near future.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/attacking-quantum-models-ai-when-can-truncated-neural-networks-deliver-results

IceCube Observes Seven Astrophysical Tau Neutrino Candidates

Neutrinos are tiny, weakly interacting subatomic particles that can travel astronomical distances undisturbed. As such, they can be traced back to their sources, revealing the mysteries surrounding the cosmos. High-energy neutrinos that originate from the farthest reaches beyond our galaxy are called astrophysical neutrinos and are the main subject of study for the IceCube Neutrino Observatory, a cubic-kilometer-sized neutrino telescope at the South Pole. In 2013, IceCube presented its first evidence of high-energy astrophysical neutrinos originating from cosmic accelerators, beginning a new era in astronomy. 

These cosmic messengers come in three different flavors: electron, muon, and tau, with astrophysical tau neutrinos being exceptionally difficult to pin down. Now, in a new study recently accepted as an “Editors’ Suggestion” by Physical Review Letters, the IceCube Collaboration presents the discovery of the once-elusive astrophysical tau neutrinos, a new kind of astrophysical messenger. 

IceCube detects neutrinos using cables (strings) of digital optical modules (DOMs), with a total of 5,160 DOMs embedded deep within the Antarctic ice. When neutrinos interact with molecules in the ice, they produce charged particles that emit blue light as they travel through the ice; that light is registered and digitized by the individual DOMs. The light produces distinctive patterns, one of which is the double cascade signature of high-energy tau neutrino interactions within the detector.

Image caption: The production of a double pulse waveform. The photons from a neutrino interaction (blue) arrive at the top middle DOM at time tI, producing the first peak in the waveform, while photons from the tau lepton decay (purple) arrive at the same DOM at time tD, producing the second peak. Credit: Jack Pairin/IceCube Collaboration
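A back-of-the-envelope picture of that double-peak signature (illustrative arrival times and pulse shapes only, not a detector simulation): two bursts of light reaching the same DOM at different times produce a waveform with two distinct peaks.

```python
import numpy as np

t = np.arange(0.0, 300.0, 3.0)        # time bins in ns (3-ns binning, as in the event displays)
t_interaction, t_decay = 80.0, 180.0  # hypothetical arrival times of the two light bursts

def burst(t0, amplitude, width=15.0):
    return amplitude * np.exp(-0.5 * ((t - t0) / width) ** 2)

waveform = burst(t_interaction, 1.0) + burst(t_decay, 0.7)

# Find the local maxima: there are two, one near each arrival time.
peaks = [float(t[i]) for i in range(1, len(t) - 1)
         if waveform[i] > waveform[i - 1] and waveform[i] > waveform[i + 1]]
print(peaks)   # -> roughly [81.0, 180.0]
```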

Prior IceCube analyses had seen hints of astrophysical tau neutrinos in searches for their subtle signatures, so the researchers remained motivated to pinpoint them. After rendering each event into three images (see figure below), they trained convolutional neural networks (CNNs) optimized for image classification to distinguish images produced by tau neutrinos from images produced by various backgrounds. After simulations confirmed the technique’s sensitivity to tau neutrinos, it was applied to 10 years of IceCube data acquired between 2011 and 2020. The result was seven strong candidate tau neutrino events. 
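To give a flavor of that image-classification step, here is a schematic PyTorch sketch in the same spirit: one small CNN scores each of the three per-string images, and the scores are combined into an event-level score. The architecture, image sizes, class names and combination rule here are hypothetical, not the collaboration's actual networks or training pipeline.

```python
# Hypothetical, minimal stand-in for a per-string image classifier (illustration only).
import torch
import torch.nn as nn

class TinyStringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):                      # x: (batch, 1, height, width)
        return self.head(self.features(x).flatten(1))

model = TinyStringCNN()
event = torch.randn(3, 1, 60, 128)             # three images: one per neighboring string
scores = model(event)                          # one raw score per string image
tau_like = torch.sigmoid(scores.mean())        # combine into a single event-level score
print(float(tau_like))
```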

“The detection of seven candidate tau neutrino events in the data, combined with the very low amount of expected background, allows us to claim that it is highly unlikely that backgrounds are conspiring to produce seven tau neutrino imposters,” said Doug Cowen, a professor of physics at Penn State University and one of the study leads. “The discovery of astrophysical tau neutrinos also provides a strong confirmation of IceCube’s earlier discovery of the diffuse astrophysical neutrino flux.”

Image caption: Candidate astrophysical tau neutrino detected on November 13, 2019. Each column corresponds to one of the three neighboring strings of the selected event. Each figure in the top row shows the DOM number, proportional to the depth, versus the time of the digitized PMT signal in 3-ns bins, with the bin color corresponding to the size of the signal in each time bin, for each of the three strings. The total number of photons detected by each string is provided at the upper left in each figure. In the most-illuminated string (left column), the arrival of light from two cascades is visible as two distinct hyperbolas. The bottom row of figures shows the “saliency” for one of the CNNs for each of the three strings. The saliency shows where changes in light level have the greatest impact on the value of the CNN score. The black line superimposed on the saliency plots shows where the light level goes to zero and is effectively an outline of the figures in the top row. The saliency is largest at the leading and trailing edges of the light emitted by the two tau neutrino cascades, showing that the CNN is mainly sensitive to the overall structure of the event. Credit: IceCube Collaboration

Cowen added that the probability of the background mimicking the signal was estimated to be less than one in 3.5 million. 

UMD Research Scientist Erik Blaufuss served as an internal reviewer for the analysis, carefully studying the methods and techniques used to make the discovery. Assistant Professor Brian Clark leads the scientific working group in IceCube that produced the result. The IceCube Collaboration includes several UMD faculty members, including Kara Hoffman, Greg Sullivan and Michael Larson, as well as several graduate students and postdocs. The UMD group plays a leading role in the maintenance and operations of the detector, as well as the simulation and analysis of the data. 

Future analyses will incorporate more of IceCube’s strings, since this study used just three of them. An expanded analysis would increase the sample of tau neutrinos, which could then be used to perform the first three-flavor study of neutrino oscillations—the phenomenon where neutrinos change flavors—over cosmological distances. This type of study could address questions such as the mechanism of neutrino production from astrophysical sources and the properties of space through which neutrinos travel. 

Currently, there is no tool specifically designed to determine the energy and direction of tau neutrinos that produce the signatures seen in this analysis. Such an algorithm could be used to better differentiate a potential tau neutrino signal from background and to help identify candidate tau neutrinos in real time at the South Pole. Similar to current IceCube real-time alerts issued for other neutrino types, alerts for tau neutrinos could be issued to the astronomical community for follow-up studies.

All in all, this exciting discovery comes with the “intriguing possibility of leveraging tau neutrinos to uncover new physics,” said Cowen. 

More information: “Observation of Seven Astrophysical Tau Neutrino Candidates with IceCube,” The IceCube Collaboration: R. Abbasi et al. Accepted by Physical Review Letters. arxiv.org/abs/2403.02516

Original story by 

A Focused Approach Can Help Untangle Messy Quantum Scrambling Problems

The world is a cluttered, noisy place, and the ability to effectively focus is a valuable skill. For example, at a bustling party, the clatter of cutlery, the conversations, the music, the scratching of your shirt tag and almost everything else must fade into the background for you to focus on finding familiar faces or giving the person next to you your undivided attention. 

Similarly, nature and experiments are full of distractions and negligible interactions, so scientists need to deliberately focus their attention on sources of useful information. For instance, the temperature of the crowded party is the result of the energy carried by every molecule in the air, the air currents, the molecules in the air picking up heat as they bounce off the guests and numerous other interactions. But if you just want to measure how warm the room is, you are better off using a thermometer that will give you the average temperature of nearby particles rather than trying to detect and track everything happening from the atomic level on up. A few well-chosen features—like temperature and pressure—are often the key to making sense of a complex phenomenon.

It is especially valuable for researchers to focus their attention when working on quantum physics. Scientists have shown that quantum mechanics accurately describes small particles and their interactions, but the details often become overwhelming when researchers consider many interacting quantum particles. Applying the rules of quantum physics to just a few dozen particles is often more than any physicist—even using a supercomputer—can keep track of. So, in quantum research, scientists frequently need to identify essential features and determine how to use them to extract practical insights without being buried in an avalanche of details.

Image caption: A collection of quantum particles can store information in various collective quantum states. The above model represents the states as blue nodes and illustrates how interactions can scramble the organized information of initial states into a messy combination by mixing the options along the illustrated links. (Credit: Amit Vikram, UMD)

In a paper published in the journal Physical Review Letters in January 2024, Professor Victor Galitski and JQI graduate student Amit Vikram identified a new way that researchers can obtain useful insights into the way information associated with a configuration of particles gets dispersed and effectively lost over time. Their technique focuses on a single feature that describes how various amounts of energy can be held by different configurations of a quantum system. The approach provides insight into how a collection of quantum particles can evolve without the researchers having to grapple with the intricacies of the interactions that make the system change over time.

This result grew out of a previous project where the pair proposed a definition of chaos for the quantum world. In that project, the pair worked with an equation describing the energy-time uncertainty relationship—the less popular cousin of the Heisenberg uncertainty principle for position and momentum. The Heisenberg uncertainty principle means there’s always a tradeoff between how accurately you can simultaneously know a quantum particle’s position and momentum. The tradeoff described by the energy-time uncertainty relationship is not as neatly defined as its cousin, so researchers must tailor its application to different contexts and be careful how they interpret it. But in general, the relationship means that knowing the energy of a quantum state more precisely increases how long it tends to take the state to shift to a new state.

When Galitski and Vikram were contemplating the energy-time uncertainty relationship, they realized it naturally lent itself to studying changes in quantum systems—even those with many particles—without getting bogged down in too many details. Using the relationship, the pair developed an approach that uses just a single feature of a system to calculate how quickly the information contained in an initial collection of quantum particles can mix and diffuse.

The feature they built their method around is called the spectral form factor. It describes the energies that quantum physics allows a system to hold and how common they are—like a map that shows which energies are common and which are rare for a particular quantum system.

The contours of the map are the result of a defining feature of quantum physics—the fact that quantum particles can only be found in certain states with distinct—quantized—energies. And when quantum particles interact, the energy of the whole combination is also limited to certain discrete options. For most quantum systems, some of the allowed energies are only possible for a single combination of the particles, while other energies can result from many different combinations. The availability of the various energy configurations in a system profoundly shapes the resulting physics, making the spectral form factor a valuable tool for researchers.
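In practice, the spectral form factor is built directly from the list of allowed energies. A minimal sketch (with a random-matrix spectrum standing in for a real system, an illustrative choice rather than anything from the paper) computes the commonly used form |sum_n exp(-i E_n t)|^2, normalized so it starts at 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrum: eigenvalues of a random symmetric matrix stand in for the
# allowed energies of some many-body quantum system.
dim = 200
A = rng.standard_normal((dim, dim))
energies = np.linalg.eigvalsh((A + A.T) / 2)

# Spectral form factor: |sum_n exp(-i E_n t)|^2, normalized to 1 at t = 0.
times = np.linspace(0.0, 50.0, 500)
sff = np.array([abs(np.exp(-1j * energies * t).sum()) ** 2 for t in times]) / dim**2

print(sff[0], sff[-1])   # starts at 1, then decays as the energy phases spread out
```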

Galitski and Vikram tailored a formulation of the energy-time uncertainty relationship around the spectral form factor to develop their method. The approach naturally applies to the spread of information since information and energy are closely related in quantum physics. 

While studying this diffusion, Galitski and Vikram focused their attention on an open question in physics called the fast-scrambling conjecture, which aims to pin down how long it takes for the organization of an initial collection of particles to be scrambled—to have its information mixed and spread out among all interacting particles until it becomes effectively unrecoverable. The conjecture is not concerned with the fastest scrambling possible in any single case; rather, it concerns how the scrambling time changes with the size or complexity of the system. 

Information loss during quantum scrambling is similar to an ice sculpture melting. Suppose a sculptor spelled out the word “swan” in ice and then absentmindedly left it sitting in a tub of water on a sunny day. Initially, you can read the word at a glance. Later, the “s” has dropped onto its side and the top of the “a” has fallen off, making it look like a “u,” but you can still accurately guess what it once spelled. But, at some point, there’s just a puddle of water. It might still be cold, suggesting there was ice recently, but there’s no practical hope of figuring out if the ice was a lifelike swan sculpture, carved into the word “swan” or just a boring block of ice. 

How long the process takes depends on both the ice and the surroundings: Perhaps minutes for a small ice cube in a lake or an entire afternoon for a two-foot-tall centerpiece in a small puddle.

The ice sculpture is like the initial information contained in a portion of the quantum particles, and the surrounding water is all the other quantum particles they can interact with. But, unlike ice, each particle in the quantum world can simultaneously inhabit multiple states, called a quantum superposition, and can become inextricably linked together through quantum entanglement, which makes deducing the original state extra difficult after it has had the chance to change. 

For practical reasons, Galitski and Vikram designed their technique so that it applies to situations where researchers never know the exact states of all the interacting quantum particles. Their approach works for a range of cases spanning those where information is stored in a small chunk of all the interacting quantum particles to ones where the information is on a majority of particles—anything from an ice cube in a lake to a sculpture in a puddle. This gives the technique an advantage over previous approaches that only work for information stored on a few of the original particles.

Using the new technique, the pair can get insight into how long it takes a quantum message to effectively melt away for a wide variety of quantum situations. As long as they know the spectral form factor, they don’t need to know anything else. 

“It's always nice to be able to formulate statements that assume as little as possible, which means they're as general as possible within your basic assumptions,” says Vikram, who is the first author of the paper. “The neat little bonus right now is that the spectral form factor is a quantity that we can in principle measure.”

The ability of researchers to measure the spectral form factor will allow them to use the technique even when many details of the system are a mystery. If scientists don’t have enough details to mathematically derive the spectral form factor or to tailor a custom description of the particles and their interactions, a measured spectral form factor can still provide valuable insights. 

As an example of applying the technique, Galitski and Vikram looked at a quantum model of scrambling called the Sachdev-Ye-Kitaev (SYK) model. Some researchers believe there might be similarities between the SYK model and the way information is scrambled and lost when it falls into a black hole. 

Galitski and Vikram’s results revealed that the scrambling time became increasingly long as they looked at larger and larger numbers of particles instead of settling into conditions that scrambled as rapidly as possible. 

“Large collections of particles take a really long time to lose information into the rest of the system,” Vikram says. “That is something we can get in a very simple way without knowing anything about the structure of the SYK model, other than its energy spectrum. And it's related to things people have been thinking about simplified models for black holes. But the real inside of a black hole may turn out to be something completely different that no one's imagined.”

Galitski and Vikram are hoping future experiments will confirm their results, and they plan to continue looking for more ways to relate a general quantum feature to the resulting dynamics without relying on many specific details. They and their colleagues are also investigating properties of the spectral form factor that every system should satisfy and are working to identify constraints on scrambling that are universal for all quantum systems.

Original story by Bailey Bedford: https://jqi.umd.edu/news/focused-approach-can-help-untangle-messy-quantum-scrambling-problems

This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0001911.