UMD Physicists Advance NASA’s Mission to ‘Touch the Sun’

Those who say there’s “nothing new under the sun” must not know about NASA’s Parker Solar Probe mission. Since its launch in 2018, this spacecraft has been shedding new light on Earth’s sun—and University of Maryland physicists are behind many of its discoveries.

At its core, the Parker Solar Probe is “on a mission to touch the sun,” in NASA’s words. It endures extreme conditions while dipping in and out of the corona—the outermost layer of the sun’s atmosphere—to collect data on magnetic fields, plasma and energetic particles. The corona is at least 100 times hotter than the sun’s surface, but that heat is no match for the spacecraft’s incredible speed and carbon-composite shield, which can withstand temperatures of 2,500 degrees Fahrenheit. Last year, the spacecraft broke its own record for the fastest object ever made by humans.

Parker Solar Probe (courtesy of NASA)

This engineering feat was built to solve solar mysteries that have long confounded scientists: What makes the sun’s corona so much hotter than its surface, and what powers the sun’s supersonic wind? These questions aren’t just of interest to scientists, either. The solar wind, which carries plasma and part of the sun’s magnetic field, can cause geomagnetic storms capable of knocking out power grids on Earth or endangering astronauts in space.

To better understand these mechanisms, the Parker Solar Probe will attempt its deepest dive into the corona on December 24, 2024, with plans to come within 3.9 million miles of the sun’s surface. Researchers hope its findings will help them predict space weather with greater accuracy and frequency in the future.

James Drake, a Distinguished University Professor in UMD’s Department of Physics and Institute for Physical Science and Technology (IPST), is helping to move the needle closer to that goal as a member of the Parker Solar Probe research team.

“This mission is what's called a discovery mission, and with a discovery mission we can never be sure what we're going to find,” Drake said. “But of course, everybody is most excited about the data that will come from the Parker Solar Probe getting very close to the sun because that will reveal new information about the solar wind.” 

Reconnecting the dots

Drake and Marc Swisdak, a research scientist in UMD’s Institute for Research in Electronics & Applied Physics (IREAP), have been involved with this mission since its inception. The researchers were asked to join because of their expertise in magnetic reconnection, a process that occurs when magnetic fields pointing in opposite directions cross-connect, releasing large amounts of magnetic energy.

Before the Parker Solar Probe, it was known that magnetic reconnection could produce solar flares and coronal mass ejections that launch magnetic energy and plasma out into space. However, this mission revealed just how important magnetic reconnection is to so many other solar processes. 

Early Parker Solar Probe data showed that magnetic reconnection was happening frequently near the equatorial plane of the heliosphere, the giant magnetic bubble that surrounds the sun and all of the planets. More specifically, this activity was observed in the heliospheric current sheet, which divides sectors of the magnetic field that point toward and away from the sun. 

“That was a big surprise,” Drake said of their findings. “Every time the spacecraft crossed the heliospheric current sheet, we saw evidence for reconnection and the associated heating and energization of the ambient plasma.”

In 2021, the Parker Solar Probe made another unexpected discovery: the existence of switchbacks in the solar wind, which Drake described as “kinks in the magnetic field.” Characterized by sharp changes in the magnetic field’s direction, these switchbacks loosely trace the shape of the letter S.

“No one predicted the switchbacks—at least not the magnitude and number of them—when Parker launched,” Swisdak said. 

To explain this odd phenomenon, Drake, Swisdak and other collaborators theorized that switchbacks were produced by magnetic reconnection in the corona. While the exact origin of the switchbacks hasn’t been definitively determined, the puzzle prompted UMD’s team to take a closer look at magnetic reconnection, especially its role in driving the solar wind.

“The role of reconnection has gone from something that was not necessarily that significant at the beginning to a major component of the entire Parker Solar Probe mission,” Drake said. “Because of our group's expertise on the magnetic reconnection topic, we have played a central role in much of this work.”

Last year, Drake and Swisdak co-authored a study with other members of the Parker science team that explained how the sun’s fast wind—one of two types of solar wind—can surpass 1 million miles per hour. They once again saw that magnetic reconnection was responsible, specifically the kind that occurs between open and closed magnetic fields, known as interchange reconnection.

To test their theories about solar activity, the UMD team also uses computer simulations to try to reproduce Parker observations. 

“I think that one of the things that convinced people that magnetic reconnection was a major driver of the solar wind is that our computer simulations were able to produce the energetic particles that they saw in the Parker Solar Probe data,” Drake said. 

As part of his dissertation, physics Ph.D. student Zhiyu Yin built the simulation model that is used to see how particles might accelerate during magnetic reconnection.

“Magnetic reconnection is very important, and our simulation model can help us connect theory with observations,” Yin said. “I'm really honored to be part of the Parker Solar Probe mission and to contribute to its work, and I believe it could lead to even more discoveries about the physics of the sun, giving us the confidence to take on more projects in exploring the solar system and other astrophysical realms.”
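To give a flavor of what such test-particle modeling involves, here is a minimal sketch in Python: it pushes a single charged particle through an idealized X-point magnetic field with a uniform out-of-plane reconnection electric field, using the standard Boris algorithm. The field geometry and every parameter here are illustrative assumptions for this article, not the group’s actual simulation code.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """Advance position and velocity one step with the Boris algorithm,
    for charge-to-mass ratio q_m in fields E and B (3-vectors)."""
    v_minus = v + 0.5 * q_m * dt * E
    t = 0.5 * q_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * dt * E
    return x + v_new * dt, v_new

def fields(x):
    """Toy reconnection geometry: a hyperbolic X-point field B = (y, x, 0)
    that reverses across the current sheet, plus a uniform reconnection
    electric field along z that energizes particles near the X-point."""
    return np.array([0.0, 0.0, 0.1]), np.array([x[1], x[0], 0.0])

x = np.array([0.1, 0.05, 0.0])   # start near the X-point (arbitrary units)
v = np.zeros(3)
for _ in range(2000):
    E, B = fields(x)
    x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.01)
print("kinetic energy gained:", 0.5 * float(np.dot(v, v)))
```

Realistic studies track many such particles through fields taken from full plasma simulations and compare the resulting energy spectra with Parker’s measurements.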

Swisdak explained that simulations also help researchers push past the limitations of space probes.

“Observations are measuring something that is real, but they’re limited. Parker can only be in one place at one time, it has a limited lifetime and it’s also very hard to run reproducible experiments on it,” Swisdak said. “Computations have complementary advantages in that you can set up a simulation based on what Parker is observing, but then you can tweak the parameters to see the bigger picture of what we think is happening.”

‘Things no one has seen’

There are still unsolved mysteries, including the exact mechanisms that produce switchbacks and drive the solar wind, but researchers hope that the Parker Solar Probe will continue to answer these and other important questions. The sun is currently experiencing more intense solar flares and coronal mass ejections than usual, which could yield new and interesting data on the mechanisms that energize particles in these explosive events.

This research also has wider relevance. Studying the solar wind can help scientists understand other winds throughout the universe, including the powerful winds produced by black holes and rapidly rotating stars called pulsars. Winds can even offer clues about the habitability of planets because of their ability to deflect harmful cosmic rays, which are forms of radiation.

“One of the reasons why the solar wind is important is because it protects planetary bodies from these very energetic particles that are bouncing around the galaxy,” Drake said. “If we didn't have that solar wind protecting us, it's not totally clear whether the Earth would have been a habitable environment.”

As the spacecraft prepares for its December descent into the sun, the UMD team is eager to see what the new observations will reveal.

“One of the nice things about being involved with this mission is that it’s a chance to make observations of things that no one has seen before. It lets you go into a new regime of space and say, ‘Alright, we thought things would look this way, and inevitably they don't,’” Swisdak said. “The ability to get close enough to the sun to see where the solar wind starts and where coronal mass ejections begin—and being able to take direct measurements of those phenomena—is really exciting.”

How Does Quantum Mechanics Meet Up With Classical Physics?

In physics, there is a deep disparity between the quantum and classical perspectives on physical laws. Classical mechanics is used to describe the familiar world around us. This is the physics you may have been exposed to in high school or early college, where you calculate the trajectory of a baseball or the speed of a car. Quantum mechanics, on the other hand, is primarily used to describe incredibly small objects on sub-micron length scales, such as electrons or atoms. Quantum mechanics is typically far from intuitive and is home to a variety of mind-bending phenomena like quantum tunneling and entanglement. The differences between classical mechanics and quantum mechanics are quite striking.

Schematic of the Aharonov-Bohm mesoscopic device connected to two electron reservoirs. The device is biased by a magnetic flux and contains a “dephasing” trapping site.

Everyday processes are governed by equations of motion that include friction, which gives rise to the irreversibility we all take for granted.  Irreversibility becomes clear when we take a movie of an egg falling onto a solid surface and cracking open.  When the movie is run backward, we can tell that it is obviously “wrong” because broken eggs don’t spontaneously re-assemble and then jump up to their original location above the surface.  We say that irreversibility creates the perception of the “arrow of time.”  However, in quantum mechanics there is no “arrow of time” because all microscopic processes are fully reversible – in other words, in the microscopic world everything is the same for time running forward or backward.  The natural question to ask is then: how do the laws of quantum mechanics segue into those of classical mechanics as you involve increasing numbers of interacting particles and influences?
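Concretely, this microscopic reversibility can be read off the Schrödinger equation. For a real Hamiltonian $H$ (no magnetic field), if $\psi(x,t)$ is a solution, then so is its complex conjugate run backward in time:

\[
i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = H\,\psi(x,t)
\quad\Longrightarrow\quad
i\hbar\,\frac{\partial}{\partial t}\,\psi^{*}(x,-t) = H\,\psi^{*}(x,-t).
\]

The forward and backward movies obey the same law, so nothing in the microscopic dynamics singles out an arrow of time.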

Semiclassical physics aims to bridge this disparity by exploring the regime between pure quantum evolution and classical physics. By introducing the corrupting influence of “dephasing”, one can disrupt the symmetric forward/backward time evolution and recover some degree of classical behavior from a quantum system, such as an electron traveling through a metal.  Of particular interest is whether this (typically undesired) “dephasing” effect creates opportunities for new technologies that can perform tasks that are impossible in either the fully quantum or fully classical limits.

The mechanism of “dephasing”, the way a quantum system is pushed towards being classical, is then of great importance and needs to be understood.  In a recent experiment performed at the University of Maryland, it was found that one current theoretical treatment of “dephasing” effectively renders the model system classical, suggesting that more nuanced notions are required to understand what happens in this interesting semiclassical regime.

Photograph of the Aharonov-Bohm-graph microwave analogue made up of coaxial cables, circulators (small boxes), phase trimmers, and attenuators (large boxes).

One hypothetical technology proposed to take advantage of this regime is a two-lead mesoscopic (i.e., very small) electrical device that would carry a net charge current in the absence of a potential difference, without the use of a superconductor, in apparent violation of the second law of thermodynamics, sometimes called the law of no free lunch. The device in question is an Aharonov-Bohm (AB) ring with two electrical leads, shown in Fig. 1, which could be connected to large reservoirs of electrons. By tailoring the quantum properties of the ring, one can create a situation in which electron waves that enter the ring at lead 1 traverse the ring only once before they exit at lead 2, while electron waves that start at lead 2 must traverse the ring three times before they can exit at lead 1. A localized “dephasing” center can be thought of as a trapping site that grabs a passing electron and holds on to it for a random amount of time before releasing it, having erased any information about where the electron came from or where it was going.  The released electron is then equally likely to exit the device through either lead.  Since the site acts preferentially on the longer-lingering electrons, it would cause more electrons to travel from 1 to 2 than from 2 to 1, resulting in a net electrical current through the device with no external work being done!
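To see how the claimed rectification works, a toy calculation helps. The Python sketch below assumes that each traversal of the ring risks capture by the trapping site with some fixed probability, and that a released electron exits either lead with probability 1/2; the numbers are illustrative choices for this article, not values from the proposal.

```python
def exit_probability(traversals, p_trap):
    """Probability that an electron entering one lead exits via the other,
    if it must complete `traversals` loops and each loop risks capture."""
    survive = (1 - p_trap) ** traversals   # never trapped: exits the far lead
    return survive + (1 - survive) * 0.5   # trapped electrons exit 50/50

p = 0.1                          # assumed per-traversal trapping probability
P21 = exit_probability(1, p)     # lead 1 -> lead 2: one traversal
P12 = exit_probability(3, p)     # lead 2 -> lead 1: three traversals
print(f"P21 = {P21:.3f}, P12 = {P12:.3f}, asymmetry = {P21 - P12:.3f}")
# P21 ~ 0.950 vs P12 ~ 0.864: more electrons flow 1 -> 2 than 2 -> 1,
# giving a net current with no applied potential difference.
```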

The team at UMD has performed an experiment to address certain aspects of this provocative proposal. Though the experiment is fully classical, the team successfully established the transmission-time imbalance using wave interference properties.  The UMD researchers made use of their recently developed concept of complex time delay to create a microwave circuit with the necessary ingredients to mimic the asymmetric transmission-time properties of the hypothetical device.  This device is considered “classical” because it is about the size of two human hands, in contrast to the originally proposed semiclassical device, which would be the size of a few molecules. The device is a microwave circuit in the shape of a ring made mainly of coaxial cables (see Fig. 2). The UMD researchers send microwave light pulses through the device to mimic electrons.  This analogue allows them to probe aspects of the proposal and test their viability.

Since they were working with a classical analogue, the researchers were limited in their ability to recreate the trapping site.  They crudely mimicked a quantum “dephasing” site with a microwave attenuator, which reduces the energy (amplitude) of a microwave pulse and essentially functions as a source of friction for the pulses.  The circuit was carefully studied and subjected to every kind of input the researchers could throw at it: frequency-domain continuous waves, time-domain pulses, and even broadband noise.

Comparison of the Aharonov-Bohm-graph microwave analogue asymmetric transmission (purple diamonds and lines, P_21-P_12 on left axis) and simulated mesoscopic device transmission probability asymmetry (black circles, P_21-P_12 on right axis), as a function of microwave dissipation (Γ_A/2) in Nepers and quantum “dephasing rate” (average number of inelastic scattering events per electron passage), on a common log scale.

The experiment does indeed show an imbalance in the transmission probability through the classical analogue microwave device.  Further, the transmission imbalance the UMD scientists measure as a function of the imitated classical “dephasing” rate is remarkably similar to the dependence on the electron “dephasing” rate found in a published numerical simulation of the mesoscopic device (see Fig. 3). These results suggest that the utilized treatment of “dephasing” does not adequately capture the quantum nature of the system, since the predicted effects appear in a purely classical system.  The team concludes that more sophisticated theoretical notions are required to understand what happens in the transition between pure quantum and classical physics.  Nevertheless, there seem to be unique opportunities to study new physics and technologies in quantum systems that interact with external degrees of freedom.

The experiments were done by graduate students Lei Chen, Isabella Giovannelli, and Nadav Shaibe in the laboratory of Prof. Steven Anlage in the Quantum Materials Center in the Physics Department at the University of Maryland.  Their paper is now published in Physical Review B (https://doi.org/10.1103/PhysRevB.110.045103).

LZ Experiment Sets New Record in Search for Dark Matter

Figuring out the nature of dark matter, the invisible substance that makes up most of the mass in our universe, is one of the greatest puzzles in physics. New results from the world’s most sensitive dark matter detector, LUX-ZEPLIN (LZ), have narrowed down possibilities for one of the leading dark matter candidates: weakly interacting massive particles, or WIMPs. 

LZ, led by the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), hunts for dark matter from a cavern nearly one mile underground at the Sanford Underground Research Facility in South Dakota. The experiment’s new results explore weaker dark matter interactions than ever searched before and further limit what WIMPs could be. UMD faculty Carter Hall and Anwar Bhatti contributed to the new results, along with Maryland graduate students John Armstrong, Eli Mizrachi, Ethan Ritchey, Bramwell Shafer, and Donghee Yeum.

LZ’s central detector, the time projection chamber, in a surface lab clean room before delivery underground. Credit: Matthew Kapust/Sanford Underground Research Facility

“These are new world-leading constraints by a sizable margin on dark matter and WIMPs,” said Chamkaur Ghag, spokesperson for LZ and a professor at University College London (UCL). He noted that the detector and analysis techniques are performing even better than the collaboration expected. “If WIMPs had been within the region we searched, we’d have been able to robustly say something about them. We know we have the sensitivity and tools to see whether they’re there as we search lower energies and accrue the bulk of this experiment’s lifetime.” 

The collaboration found no evidence of WIMPs above a mass of 9 gigaelectronvolts/c² (GeV/c²). (For comparison, the mass of a proton is slightly less than 1 GeV/c².) The experiment’s sensitivity to faint interactions helps researchers reject potential WIMP dark matter models that don’t fit the data, leaving significantly fewer places for WIMPs to hide. The new results were presented at two physics conferences on August 26: TeV Particle Astrophysics 2024 in Chicago, Illinois, and LIDINE 2024 in São Paulo, Brazil. A scientific paper will be published in the coming weeks.

The results analyze 280 days’ worth of data: a new set of 220 days (collected between March 2023 and April 2024) combined with 60 earlier days from LZ’s first run. The experiment plans to collect 1,000 days’ worth of data before it ends in 2028.

“If you think of the search for dark matter like looking for buried treasure, we’ve dug almost five times deeper than anyone else has in the past,” said Scott Kravitz, LZ’s deputy physics coordinator and a professor at the University of Texas at Austin. “That’s something you don’t do with a million shovels – you do it by inventing a new tool.”

LZ’s sensitivity comes from the myriad ways the detector can reduce backgrounds, the false signals that can impersonate or hide a dark matter interaction. Deep underground, the detector is shielded from cosmic rays coming from space. To reduce natural radiation from everyday objects, LZ was built from thousands of ultraclean, low-radiation parts. The detector is built like an onion, with each layer either blocking outside radiation or tracking particle interactions to rule out dark matter mimics. And sophisticated new analysis techniques help rule out background interactions, particularly those from the most common culprit: radon.

This result is also the first time that LZ has applied “salting” – a technique that adds fake WIMP signals during data collection. By camouflaging the real data until “unsalting” at the very end, researchers can avoid unconscious bias and keep from overly interpreting or changing their analysis.
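In spirit, the technique works like the sketch below (an illustrative Python schematic; the event format, fake-signal spectrum and bookkeeping are assumptions for this article, not LZ’s actual pipeline):

```python
import random

def salt_dataset(real_events, n_fake, seed=42):
    """Blind an analysis by mixing flagged fake signal events into the data.
    Returns the shuffled, unflagged dataset plus a sealed record of which
    entries are fake -- opened only at the very end ('unsalting')."""
    rng = random.Random(seed)
    fakes = [{"recoil_keV": rng.gauss(10.0, 2.0), "fake": True}
             for _ in range(n_fake)]                      # assumed toy spectrum
    combined = [dict(ev, fake=False) for ev in real_events] + fakes
    rng.shuffle(combined)
    sealed = [ev.pop("fake") for ev in combined]          # hidden from analyzers
    return combined, sealed

def unsalt(events, sealed):
    """After the analysis is frozen, drop the injected fakes."""
    return [ev for ev, is_fake in zip(events, sealed) if not is_fake]

data, sealed = salt_dataset([{"recoil_keV": 3.2}, {"recoil_keV": 7.8}], n_fake=2)
real_only = unsalt(data, sealed)   # recovers only the genuine events
```

Because the analyzers never see the sealed record until the analysis is frozen, any apparent signal they find along the way cannot tempt them into tuning their selection criteria.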

“We’re pushing the boundary into a regime where people have not looked for dark matter before,” said Scott Haselschwardt, the LZ physics coordinator and a recent Chamberlain Fellow at Berkeley Lab who is now an assistant professor at the University of Michigan. “There’s a human tendency to want to see patterns in data, so it’s really important when you enter this new regime that no bias wanders in. If you make a discovery, you want to get it right.”

Members of the LZ collaboration gather at the Sanford Underground Research Facility in June 2023, shortly after the experiment began the recent science run. (Credit: Stephen Kenny/Sanford Underground Research Facility)

Dark matter, so named because it does not emit, reflect, or absorb light, is estimated to make up 85% of the mass in the universe but has never been directly detected, though it has left its fingerprints on multiple astronomical observations. We wouldn’t exist without this mysterious yet fundamental piece of the universe; dark matter’s mass contributes to the gravitational attraction that helps galaxies form and stay together.

LZ uses 10 tonnes of liquid xenon to provide a dense, transparent material for dark matter particles to potentially bump into. The hope is for a WIMP to knock into a xenon nucleus, causing it to move, much like a hit from a cue ball in a game of pool. By collecting the light and electrons emitted during interactions, LZ captures potential WIMP signals alongside other data.
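The cue-ball picture can be made quantitative with standard elastic-scattering kinematics (a textbook relation, not a number from the LZ analysis): a WIMP of mass $m_\chi$ moving at speed $v$ can transfer at most

\[
E_R^{\max} = \frac{2\,\mu^{2} v^{2}}{m_N}, \qquad \mu = \frac{m_\chi m_N}{m_\chi + m_N},
\]

to a xenon nucleus of mass $m_N$. For galactic-halo speeds of a few hundred kilometers per second, that works out to recoil energies of only tens of keV at most, which is why the detector must register such faint signals.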

“We’ve demonstrated how strong we are as a WIMP search machine, and we’re going to keep running and getting even better – but there’s lots of other things we can do with this detector,” said Amy Cottle, lead on the WIMP search effort and an assistant professor at UCL. “The next stage is using these data to look at other interesting and rare physics processes, like rare decays of xenon atoms, neutrinoless double beta decay, boron-8 neutrinos from the sun, and other beyond-the-Standard-Model physics. And this is in addition to probing some of the most interesting and previously inaccessible dark matter models from the last 20 years.”

LZ is a collaboration of roughly 250 scientists and engineers from 38 institutions in the United States, United Kingdom, Portugal, Switzerland, South Korea, and Australia; much of the work building, operating, and analyzing the record-setting experiment is done by early career researchers. The collaboration is already looking forward to analyzing the next data set and using new analysis tricks to look for even lower-mass dark matter. Scientists are also thinking through potential upgrades to further improve LZ, and planning for a next-generation dark matter detector called XLZD.

“Our ability to search for dark matter is improving at a rate faster than Moore’s Law,” Kravitz said. “If you look at an exponential curve, everything before now is nothing. Just wait until you see what comes next.”

Original story: https://newscenter.lbl.gov/2024/08/26/lz-experiment-sets-new-record-in-search-for-dark-matter/

LZ is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics and the National Energy Research Scientific Computing Center, a DOE Office of Science user facility. LZ is also supported by the Science & Technology Facilities Council of the United Kingdom; the Portuguese Foundation for Science and Technology; the Swiss National Science Foundation; and the Institute for Basic Science, Korea. Over 38 institutions of higher education and advanced research provided support to LZ. The LZ collaboration acknowledges the assistance of the Sanford Underground Research Facility.

Particle Physics and Quantum Simulation Collide in New Proposal

Quantum particles have unique properties that make them powerful tools, but those very same properties can be the bane of researchers. Each quantum particle can inhabit a combination of multiple possibilities, called a quantum superposition, and together they can form intricate webs of connection through quantum entanglement.

These phenomena are the main ingredients of quantum computers, but they also often make it almost impossible to use traditional tools to track a collection of strongly interacting quantum particles for very long. Both human brains and supercomputers, which each operate using non-quantum building blocks, are easily overwhelmed by the rapid proliferation of the resulting interwoven quantum possibilities.

A spring-like force, called the strong force, works to keep quarks—represented by glowing spheres—together as they move apart after a collision. Quantum simulations proposed to run on superconducting circuits might provide insight into the strong force and how collisions produce new particles. The diagrams in the background represent components used in superconducting quantum devices. (Credit: Ron Belyansky)

In nuclear and particle physics, as well as many other areas, the challenges involved in determining the fate of quantum interactions and following the trajectories of particles often hinder research or force scientists to rely heavily on approximations. To counter this, researchers are actively inventing techniques and developing novel computers and simulations that promise to harness the properties of quantum particles in order to provide a clearer window into the quantum world.

Zohreh Davoudi, an associate professor of physics at the University of Maryland and Maryland Center for Fundamental Physics, is working to ensure that the relevant problems in her fields of nuclear and particle physics don’t get overlooked and are instead poised to reap the benefits when quantum simulations mature. To pursue that goal, Davoudi and members of her group are combining their insights into nuclear and particle physics with the expertise of colleagues—like Adjunct Professor Alexey Gorshkov and Ron Belyansky, a former JQI graduate student under Gorshkov and a current postdoctoral associate at the University of Chicago—who are familiar with the theories that quantum technologies are built upon. 

In an article published earlier this year in the journal Physical Review Letters, Belyansky, who is the first author of the paper, together with Davoudi, Gorshkov and their colleagues, proposed a quantum simulation that might be possible to implement soon. They propose using superconducting circuits to simulate a simplified model of collisions between fundamental particles called quarks and mesons (which are themselves made of quarks and antiquarks). In the paper, the group presented the simulation method and discussed what insights the simulations might provide about the creation of particles during energetic collisions. 

Particle collisions—like those at the Large Hadron Collider—break particles into their constituent pieces and release energy that can form new particles. These energetic experiments that spawn new particles are essential to uncovering the basic building blocks of our universe and understanding how they fit together to form everything that exists. When researchers interpret the messy aftermath of collision experiments, they generally rely on simulations to figure out how the experimental data matches the various theories developed by particle physicists.

Quantum simulations are still in their infancy. The team’s proposal is an initial effort that simplifies things by avoiding the complexity of three-dimensional reality, and it represents an early step on the long journey toward quantum simulations that can tackle the most realistic fundamental theories that Davoudi and other particle physicists are most eager to explore. The diverse insights of many theorists and experimentalists must come together and build on each other before quantum simulations will be mature enough to tackle challenging problems, like following the evolution of matter after highly energetic collisions.

“We, as theorists, try to come up with ideas and proposals that not only are interesting from the perspective of applications but also from the perspective of giving experimentalists the motivation to go to the next level and push to add more capabilities to the hardware,” says Davoudi, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS) and a Senior Investigator at the Institute for Robust Quantum Simulation (RQS). “There was a lot of back and forth regarding which model and which platform. We learned a lot in the process; we explored many different routes.”

A Quantum Solution to a Quantum Problem

The meetings with Davoudi and her group brought particle physics concepts to Belyansky’s attention. Those ideas were bouncing around inside his head when he came across a mathematical tool that allows physicists to translate a model into a language where particle behaviors look fundamentally different. The ideas collided and crystallized into a possible method to efficiently simulate a simple particle physics model, called the Schwinger model. The key was getting the model into a form that could be efficiently represented on a particular quantum device. 
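For context, the Schwinger model is quantum electrodynamics in one space and one time dimension. A standard lattice (staggered-fermion) form of its Hamiltonian, quoted here for orientation from the general literature rather than from the paper itself, is

\[
H = -\frac{i}{2a}\sum_{n}\left(\chi_n^{\dagger}\,U_n\,\chi_{n+1} - \mathrm{h.c.}\right)
+ m\sum_{n}(-1)^{n}\,\chi_n^{\dagger}\chi_n
+ \frac{a g^{2}}{2}\sum_{n}E_n^{2},
\]

where the $\chi_n$ are fermion fields on lattice sites, $U_n$ and $E_n$ are the gauge link and electric field between sites, $a$ is the lattice spacing, $m$ is the fermion mass and $g$ is the gauge coupling.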

Belyansky had stumbled upon a tool for mapping between certain theories that describe fermions and theories that describe bosons. Every fundamental quantum particle is either a fermion or boson, and whether a particle is one or the other governs how it behaves. If a particle is a fermion, like protons, quarks and electrons, then no two of that type of particle can ever share the same quantum state. In contrast, bosons, like the mesons formed by quarks, are willing to share the same state with any number of their identical brethren. Switching between two descriptions of a theory can provide researchers with entirely new tools for tackling a problem.

Based on Belyansky’s insight, the group determined that translating the fermion-based description of the Schwinger model into the language of bosons could be useful for simulating quark and meson collisions. The translation put the model into a form that more naturally mapped onto the technology of circuit quantum electrodynamics (QED). Circuit QED uses light trapped in superconducting circuits to create artificial atoms, which can be used as the building blocks of quantum computers and quantum simulations. The pieces of a circuit can combine to behave like a boson, and the group mapped the boson behavior onto the behavior of quarks and mesons during collisions.

This type of simulation that uses a device’s natural behaviors to directly mimic a behavior of interest is called an analog simulation. This approach is generally more efficient than designing simulations to be compatible with diverse quantum computers. And since analog approaches lean into the underlying technology’s natural behavior, they can play to the strengths of early quantum devices. In the paper, the team described how their analog simulation could run on a relatively simple quantum device without relying on many approximations.

"It is particularly exciting to contribute to the development of analog quantum simulators—like the one we propose—since they are likely to be among the first truly useful applications of quantum computers," says Gorshkov, who is also a Physicist at the National Institute of Standards and Technology, a QuICS Fellow and an RQS Senior Investigator.

The translation technique Belyansky and his collaborators used has a limitation: It only works in one space dimension. The restriction to one dimension means that the model is unable to replicate real experiments, but it also makes things much simpler and provides a more practical goal for early quantum simulations. Physicists call this sort of simplified case a toy model. The team decided this one-dimensional model was worth studying because its description of the force that binds quarks into mesons—the strong force—still shares features with how it behaves in three space dimensions.

“Playing around with these toy models and being able to actually see the outcome of these quantum mechanical collision processes would give us some insight as to what might go on in actual strong force processes and may lead to a prediction for experiments,” Davoudi says. “That's sort of the beauty of it.” 

Scouting Ahead with Current Computers 

The researchers did more than lay out a proposal for experimentally implementing their simulations using quantum technology. By focusing on the model under restrictions, like limiting the collision energy, they simplified the calculations enough to explore certain scenarios using a regular computer without any quantum advantages.

Even with the imposed limitations, the simplified model was still able to simulate more than the most basic collisions. Some of the simulations describe collisions that spawned new particles instead of merely featuring the initial quarks and mesons bouncing around without anything new popping up. The creation of particles during collisions is an important feature that prior simulation methods fell short of capturing.

These results help illustrate the potential of the approach to provide insights into how particle collisions produce new particles. While similar simulation techniques that don’t harness quantum power will always be limited, they will remain useful for future quantum research: Researchers can use them in identifying which quantum simulations have the most potential and in confirming if a quantum simulation is performing as expected.

Continuing the Journey

There is still a lot of work to be done before Davoudi and her collaborators can achieve their goal of simulating more realistic models in nuclear and particle physics. Belyansky says that both one-dimensional toy models and the tools they used in this project will likely deliver more results moving forward.

“To get to the ultimate goal, we need to add more ingredients,” Belyansky says. “Adding more dimensions is difficult, but even in one dimension, we can make things more complicated. And on the experimental side, people need to build these things.”

For her part, Davoudi is continuing to collaborate with several research groups to develop quantum simulations for nuclear and particle physics research. 

“I'm excited to continue this kind of multidisciplinary collaboration, where I learn about these simpler, more experimentally feasible models that have features in common with theories of interest in my field and to try to see whether we can achieve the goal of realizing them in quantum simulators,” Davoudi says. “I'm hoping that this continues, that we don't stop here.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/particle-physics-and-quantum-simulation-collide-new-proposal


New Photonic Chip Spawns Nested Topological Frequency Comb

Scientists on the hunt for compact and robust sources of multicolored laser light have generated the first topological frequency comb. Their result, which relies on a small silicon nitride chip patterned with hundreds of microscopic rings, will appear in the June 21, 2024 issue of the journal Science.

Light from an ordinary laser shines with a single, sharply defined color—or, equivalently, a single frequency. A frequency comb is like a souped-up laser, but instead of emitting a single frequency of light, a frequency comb shines with many pristine, evenly spaced frequency spikes. The even spacing between the spikes resembles the teeth of a comb, which lends the frequency comb its name.
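In equations, the teeth of an ideal comb sit at evenly spaced frequencies (a standard textbook relation, not specific to this work):

\[
f_n = f_0 + n\,f_{\mathrm{rep}}, \qquad n = 0, 1, 2, \ldots,
\]

where $f_{\mathrm{rep}}$ is the spacing between neighboring teeth and $f_0$ is a fixed offset frequency; together, these two numbers pin down every tooth of the comb.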

A new chip with hundreds of microscopic rings generated the first topological frequency comb. (Credit: E. Edwards)

The earliest frequency combs required bulky equipment to create. More recently, researchers have focused on miniaturizing them into integrated, chip-based platforms. Despite big improvements in shrinking the equipment needed to generate frequency combs, the fundamental ideas haven’t changed. Creating a useful frequency comb requires a stable source of light and a way to disperse that light into the teeth of the comb by taking advantage of optical gain, loss and other effects that emerge when the source of light gets more intense.

In the new work, JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and physics at the University of Maryland (UMD), JQI Fellow Kartik Srinivasan, who is also a Fellow of the National Institute of Standards and Technology, and several colleagues have combined two lines of research into a new method for generating frequency combs. One line is attempting to miniaturize the creation of frequency combs using microscopic resonator rings fabricated out of semiconductors. The second involves topological photonics, which uses patterns of repeating structures to create pathways for light that are immune to small imperfections in fabrication.

“The world of frequency combs is exploding in single-ring integrated systems,” says Chris Flower, a graduate student at JQI and the UMD Department of Physics and the lead author of the new paper. “Our idea was essentially, could similar physics be realized in a special lattice of hundreds of coupled rings? It was a pretty major escalation in the complexity of the system.”

By designing a chip with hundreds of resonator rings arranged in a two-dimensional grid, Flower and his colleagues engineered a complex pattern of interference that takes input laser light and circulates it around the edge of the chip while the material of the chip itself splits it up into many frequencies. In the experiment, the researchers took snapshots of the light from above the chip and showed that it was, in fact, circulating around the edge. They also siphoned out some of the light to perform a high-resolution analysis of its frequencies, demonstrating that the circulating light had the structure of a frequency comb twice over. They found one comb with relatively broad teeth and, nestled within each tooth, they found a smaller comb hiding.

A schematic of the new experiment. Incoming pulsed laser light (the pump laser) enters a chip that hosts hundreds of microrings. Researchers used an IR camera above the chip to capture images of light circulating around the edge of the chip, and they used a spectrum analyzer to detect a nested frequency comb in the circulating light.

Although this nested comb is only a proof of concept at the moment—its teeth aren’t quite evenly spaced and they are a bit too noisy to be called pristine—the new device could ultimately lead to smaller and more efficient frequency comb equipment that can be used in atomic clocks, rangefinding detectors, quantum sensors and many other tasks that call for accurate measurements of light. The well-defined spacing between spikes in an ideal frequency comb makes them excellent tools for these measurements. Just as the evenly spaced lines on a ruler provide a way to measure distance, the evenly spaced spikes of a frequency comb allow the measurement of unknown frequencies of light. Mixing a frequency comb with another light source produces a new signal that can reveal the frequencies present in the second source.
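The ruler analogy can be written down directly (again, standard frequency-comb metrology rather than a result of this paper): mixing an unknown signal at frequency $f_x$ with the comb produces a low-frequency beat note

\[
f_{\mathrm{beat}} = \lvert f_x - f_n \rvert
\]

against the nearest tooth $f_n$, so a measurement of $f_{\mathrm{beat}}$ determines $f_x = f_n \pm f_{\mathrm{beat}}$.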

Repetition Breeds Repetition

At least qualitatively, the repeating pattern of microscopic ring resonators on the new chip begets the pattern of frequency spikes that circulate around its edge.

Individually, the microrings form tiny cells that allow photons—the quantum particles of light—to hop from ring to ring. The shape and size of the microrings were carefully chosen to create just the right kind of interference between different hopping paths, and, taken together, the individual rings form a super-ring. Collectively, all the rings disperse the input light into the many teeth of the comb and guide them along the edge of the grid.

The microrings and the larger super-ring provide the system with two different time and length scales, since it takes light longer to travel around the larger super-ring than any of the smaller microrings. This ultimately leads to the generation of the two nested frequency combs: One is a coarse comb produced by the smaller microrings, with frequency spikes spaced widely apart. Within each of those coarsely spaced spikes lives a finer comb, produced by the super-ring. The authors say that this nested comb-within-a-comb structure, reminiscent of Russian nesting dolls, could be useful in applications that require precise measurements of two different frequencies that happen to be separated by a wide gap.
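The two comb spacings follow from the round-trip times of the two loops: a ring of circumference $L$ and group index $n_g$ has a tooth spacing (free spectral range) of $c/(n_g L)$. The Python sketch below plugs in purely illustrative numbers, not the device’s actual dimensions, to show why the super-ring comb nests finely inside the microring comb.

```python
import math

# Free spectral range (comb-tooth spacing) of a ring resonator: FSR = c / (n_g * L).
c = 2.998e8                          # speed of light (m/s)
n_g = 2.1                            # assumed group index for silicon nitride
L_micro = 2 * math.pi * 25e-6        # assumed microring circumference (25 um radius)
L_super = 100 * L_micro              # assumed super-ring threading ~100 microrings

fsr_micro = c / (n_g * L_micro)      # coarse comb spacing from each microring
fsr_super = c / (n_g * L_super)      # fine spacing nested within each coarse tooth
print(f"microring FSR ~ {fsr_micro / 1e9:.0f} GHz")   # ~900 GHz
print(f"super-ring FSR ~ {fsr_super / 1e9:.1f} GHz")  # ~9 GHz
```

The hundredfold-longer super-ring path gives a hundredfold-finer tooth spacing, which is the comb-within-a-comb structure the team observed.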

Getting Things Right

It took more than four years for the experiment to come together, a problem exacerbated by the fact that only one company in the world could make the chips that the team had designed.

Early chip samples had microrings that were too thick with bends that were too sharp. Once input light passed through these rings, it would scatter in all kinds of unwanted ways, washing out any hope of generating a frequency comb. “The first generation of chips didn’t work at all because of this,” Flower says. Returning to the design, he trimmed down the ring width and rounded out the corners, ultimately landing on a third generation of chips that were delivered in mid-2022.

While iterating on the chip design, Flower and his colleagues also discovered that it would be difficult to deliver enough laser power into the chip. In order for the chip to work, the intensity of the input light needed to exceed a threshold—otherwise no frequency comb would form. Normally they would have reached for a commercial continuous-wave (CW) laser, which delivers a steady beam of light. But those lasers delivered too much heat to the chip, causing it to burn out or to swell and become misaligned with the light source. The team needed to concentrate the energy in bursts to deal with these thermal issues, so they pivoted to a pulsed laser that delivers its energy in a fraction of a second.

But that introduced its own problems: Off-the-shelf pulsed lasers had pulses that were too short and contained too many frequencies. They tended to introduce a jumble of unwanted light—both on the edge of the chip and through its middle—instead of the particular edge-constrained light that the chip was designed to disperse into a frequency comb. Given the long lead time and expense involved in getting new chips, the team needed to find a laser that balanced peak power delivery with longer-duration, tunable pulses.

“I sent out emails to basically every laser company,” Flower says. “I searched to find somebody who would make me a custom tunable and long-pulse-duration laser. Most people said they don't make that, and they're too busy to do custom lasers. But one company in France got back to me and said, ‘We can do that. Let's talk.’”

His persistence paid off, and, after a couple of shipments back and forth from France to install a beefier cooling system for the new laser, the team finally sent the right kind of light into their chip and saw a nested frequency comb come out.

The team says that while their experiment is specific to a chip made from silicon nitride, the design could easily be translated to other photonic materials that could create combs in different frequency bands. They also consider their chip the introduction of a new platform for studying topological photonics, especially in applications where a threshold exists between relatively predictable behavior and more complex effects—like the generation of a frequency comb.

Original story by Chris Cesare: https://jqi.umd.edu/news/new-photonic-chip-spawns-nested-topological-frequency-comb

In addition to Hafezi, Srinivasan and Flower, there were eight other authors of the new paper: Mahmoud Jalali Mehrabad, a postdoctoral researcher at JQI; Lida Xu, a graduate student at JQI; Grégory Moille, an assistant research scientist at JQI; Daniel G. Suarez-Forero, a postdoctoral researcher at JQI; Oğulcan Örsel, a graduate student at the University of Illinois at Urbana-Champaign (UIUC); Gaurav Bahl, a professor of mechanical science and engineering at UIUC; Yanne Chembo, a professor of electrical and computer engineering at UMD and the director of the Institute for Research in Electronics and Applied Physics; and Sunil Mittal, an assistant professor of electrical and computer engineering at Northeastern University and a former postdoctoral researcher at JQI.

This work was supported by the Air Force Office of Scientific Research (FA9550-22-1-0339), the Office of Naval Research (N00014-20-1-2325), the Army Research Laboratory (W911NF1920181), the National Science Foundation (DMR-2019444), and the Minta Martin and Simons Foundations.