When Superfluids Collide, Physicists Find a Mix of Old and New

Physics is often about recognizing patterns, sometimes repeated across vastly different scales. For instance, moons orbit planets in the same way planets orbit stars, which in turn orbit the center of a galaxy.

When researchers first studied the structure of atoms, they were tempted to extend this pattern down to smaller scales and describe electrons as orbiting the nuclei of atoms. This is true to an extent, but the quirks of quantum physics mean that the pattern breaks in significant ways. An electron remains in a defined orbital area around the nucleus, but unlike a classical orbit, an electron will be found at a random location in the area instead of proceeding along a precisely predictable path.

That electron orbits bear any similarity to the orbits of moons or planets is because all of these orbital systems feature attractive forces that pull the objects together. But a discrepancy arises for electrons because of their quantum nature. Similarly, superfluids—a quantum state of matter—have a dual nature, and to understand them, researchers have had to pin down when they follow the old rules of regular fluids and when they play by their own quantum rules. For instance, superfluids will fill the shape of a container like normal fluids, but their quantum nature lets them escape by climbing vertical walls. Most strikingly, they flow without any friction, which means they can spin endlessly once stirred up.

Image caption: A new experiment forces two quantum superfluids together and creates mushroom cloud shapes similar to those seen above explosions. The blue and yellow areas represent two different superfluids, which each react differently to magnetic fields. After separating the two superfluids (as shown on the left), researchers pushed them together, forcing them to mix and creating the recognizable pattern that eventually broke apart into a chaotic mess. (Credit: Yanda Geng/JQI)

JQI Fellows Ian Spielman and Gretchen Campbell and their colleagues have been investigating the rich variety of quantum behaviors present in superfluids and exploring ways to utilize them. In a set of recent experiments, they mixed together two superfluids and stumbled upon some unexpected patterns that were familiar from normal fluids. In an article published in Aug. 2025 in the journal Science Advances, the team described the patterns they saw in their experiments, which mirrored the ripples and mushroom clouds that commonly occur when two ordinary fluids with different densities meet.

The team studies a type of superfluid called a Bose-Einstein condensate (BEC). BECs form when many particles are cooled to such low temperatures that they all collect into a single quantum state. That consolidation lets all the atoms coordinate and allows the quirks of quantum physics to play out at a much larger scale than is common in nature. The particular BEC they used could easily be separated into two superfluids, providing a convenient way for the team to prepare nearly smooth interfaces, which were useful for watching mixing patterns balloon from the tiniest seeds of imperfection into a turbulent mess. And the researchers didn’t only find classical fluid behaviors in the quantum world; they also spied the quantum fingerprints hidden beneath the surface. Using the uniquely quantum features of their experiment, they developed a new technique for observing currents along the interface of two superfluids.

“It was really exciting to see how the behavior of normal liquids played out for superfluids, and to invent a new measurement technique leveraging their uniquely quantum behavior,” Spielman says.

To make the two superfluid BECs in the new experiment, the researchers used sodium atoms. Each sodium atom has a spin, a quantum property that makes it act like a little magnet that can either point with or against a magnetic field. Hitting the cooled-down cloud of sodium atoms with microwaves produces roughly equal numbers of atoms with spins pointing in opposite directions, which forms two BECs with distinct behaviors. In an uneven magnetic field, the cloud of the two intermingled BECs formed by the microwave pulse will sort itself into two adjacent clouds, with one effectively floating on top of the other; adjusting the field can make the superfluids move around.
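
The sorting can be understood with the textbook Zeeman picture: each spin state has a magnetic energy that depends on the local field, so a field gradient pushes the two states in opposite directions. The sketch below uses a generic magnetic moment μ rather than the sodium-specific factors from the experiment, so it is an illustration of the general mechanism rather than an equation from the paper.

```latex
% Textbook Zeeman picture (not taken from the paper): a spin pointing with or
% against the field has magnetic potential energy
U_{\pm}(x) \approx \pm\, \mu\, B(x),
% so in a field that varies across the cloud the two spin states feel opposite forces,
F_{\pm}(x) = -\frac{dU_{\pm}}{dx} \approx \mp\, \mu\, \frac{dB}{dx},
% which is what lets an uneven magnetic field sort the mixture into two adjacent clouds.
```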

This process was old hat in the lab, but, together with a little happenstance, it inspired the new experiment. JQI graduate student Yanda Geng, who is the lead author of the paper, was initially working on another project that required him to smooth out variations of the magnetic field in his setup. To test for magnetic fluctuations, Geng would routinely turn his cloud of atoms into the two BECs and take a snapshot of their distribution. The resulting images caught the eye of JQI postdoctoral researcher Mingshu Zhao, who at the time was working on his own project about turbulence in superfluids. Zhao, who is also an author of the paper, thought that the swirling patterns in the superfluids were reminiscent of turbulence in normal fluids. The snapshots from the calibration didn’t clearly show mushroom clouds, but something about the way the two BECs mixed seemed familiar.

“This is what you call serendipity,” Geng says. “And if you have somebody in the lab who knows what could have happened, they immediately could say, ‘Oh, that's something interesting and probably worth pursuing scientifically.’”

The hints kept appearing as Geng’s original experiment repeatedly hit roadblocks. After months of working on the project, he felt like he was banging his head against a wall. One weekend, another colleague, JQI postdoctoral researcher Junheng Tao, encouraged Geng to mix things up and spend some time exploring the hints of turbulence. Tao, who is also an author of the paper, suggested they intentionally create the two fluids in a stable state and check if they could see patterns forming before the turbulence erupted.

“It was a Sunday, we went into the lab, and we just casually put in some numbers and programmed the experiment, and bam, you see the signal,” Geng says.

The magnetic responses of the two BECs gave Geng and Tao a convenient way to control the superfluids. First, they let magnetism pull the two BECs into a stable configuration in which they lie flush against each other, like oil floating on water. Then they reversed the way the magnetic field varied across the experiment, so the BECs were suddenly pulled in the opposite direction, instantly producing the equivalent of water balanced on top of oil.

After adjusting the field, Geng and Tao were able to take just a single snapshot of the mixing BECs. To get the image, they relied on the fact that the BECs naturally absorb different colors of light. They flashed a color that interacted with just one of the BECs, so they could identify each BEC based on where the light was absorbed. Inconveniently, absorbing the light knocked many atoms out of the BECs, so snapping the image ended the run of the experiment.

By waiting different amounts of time each run, they were able to piece together what was happening as the two BECs mixed. The results revealed the distinctive formation of mushroom clouds that ultimately degenerated into messy turbulence. The researchers determined that despite the many stark differences between BEC superfluids and classical fluids, the BECs recreated a widespread effect, called the Rayleigh-Taylor instability, that is found in normal fluids.

The Rayleigh-Taylor instability describes what happens when two distinct fluids need to exchange places, such as when a dense gas or liquid sits on top of a lighter one with gravity pulling it down. Small imperfections in the nearly stable interface grow in a characteristic pattern and then devolve into unpredictable turbulent mixing. It occurs for water on top of oil, for cool dense air over hotter air (as happens after a big explosion) and when layers of material explode out from a star during a supernova. The instability contributes to the iconic “mushroom clouds” observed in the air layers moving above explosions, and similar shapes were found in the BEC.
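
For readers who want a number to attach to "small imperfections grow," the textbook linear analysis of the Rayleigh-Taylor instability for an ideal, inviscid interface gives a simple growth rate; this standard estimate is offered for context and is not a result from the paper.

```latex
% Classical Rayleigh-Taylor growth rate (a textbook estimate, not from the paper):
\sigma = \sqrt{A\, g\, k}, \qquad
A = \frac{\rho_{\mathrm{heavy}} - \rho_{\mathrm{light}}}{\rho_{\mathrm{heavy}} + \rho_{\mathrm{light}}},
% so a ripple of wavenumber k grows as e^{\sigma t}: a bigger density contrast A or a
% stronger effective gravity g makes the interface break up faster.
```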

“At first it's really mind-boggling,” Geng says. “How can it happen here? They’re just completely different things.”

With a little more work, they confirmed they could reliably recreate the behavior and showed that the superfluids in the experiment had all the necessary ingredients to produce the instability. In the experiment, the researchers had effectively substituted magnetism into the role gravity often plays in the creation of the Rayleigh-Taylor instability. This made it convenient to flip the direction of the force at a whim, which made it easy to begin with a calm interface between the fluids and observe the instability balloon from the tiniest seeds of imperfection into turbulent mixing.

The initial result prompted the group to follow up on the project with another experiment exploring a more stable effect at the interface. Instead of completely flipping the force, they kept the “lighter” BEC on top—like oil, or even air, resting on water. By continuously varying the magnetic field at a particular rate, they could shake the interface and create the equivalent of ripples on the surface of a pond. Since the atoms in each BEC all share a quantum state, the ripples have quantum properties and can behave like particles (called ripplons).

But despite the clear patterns resembling mushroom clouds and ripples of normal fluids, the quantum nature of the BECs was still present throughout the experiment. After seeing the familiar behaviors, Geng began to think about the quantum side of the superfluids and turned his attention to something that is normally challenging to do with BECs—measuring the velocity of currents flowing through them.

Geng and his colleagues used the fact that the velocity of a BEC is tied to its phase—a wavelike feature of every quantum state. The phase of a single quantum object is normally invisible, but when multiple phases interact, they can influence what researchers see in experiments. Like waves, if two phases are both at a peak when they meet, they combine, but if a peak meets a trough, they instead cancel out. Other circumstances produce intermediate degrees of combining or partially cancelling out. When different interactions occur at different positions, they create patterns that are often visible in experiments. Geng realized that at the interfaces in his experiment, where the wavefunctions of the two BECs met, he had a unique chance to observe interfering BEC phases and determine the velocities of the currents flowing along the interface.
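
The relationship Geng exploited is the standard superfluid result that the flow velocity is proportional to how quickly the phase varies in space. A minimal statement of it, in textbook notation with m the atomic mass, is below; it is background physics rather than a formula quoted from the paper.

```latex
% Standard superfluid relation (textbook form, not a result unique to this paper):
% write the condensate wavefunction as an amplitude times a phase factor,
\psi(\mathbf{r}) = \sqrt{n(\mathbf{r})}\; e^{i\phi(\mathbf{r})},
% then the flow velocity is set by how quickly the phase varies in space,
\mathbf{v}(\mathbf{r}) = \frac{\hbar}{m}\, \nabla\phi(\mathbf{r}),
% so mapping out phase differences along the interface amounts to measuring currents.
```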

When the two BECs came together in their experiments, their phases interfered, but the resulting interference pattern remained hidden. However, Geng knew how to translate the hidden interference pattern to something he could see. Hitting the BECs with a microwave pulse could push the sodium atoms into new states where the pattern could be experimentally observed. With that translation, Geng could use his normal snapshot technique to capture an image of the interference between the two phases.

The quantum patterns he saw provide an additional tool for understanding the mixing of superfluids and demonstrate how the familiar Rayleigh-Taylor instability pattern found in the experiment had quantum patterns hidden beneath the surface. The results revealed that despite BEC superfluids being immersed in the quantum world, researchers can still benefit from keeping an eye out for the old patterns familiar from research on ordinary fluids.

“I think it's a very amazing thing for physicists to see the same phenomenon manifest in different systems, even though they are drastically different in their nature,” Geng says.

Original story by Bailey Bedford: https://jqi.umd.edu/news/when-superfluids-collide-physicists-find-mix-old-and-new

In addition to Campbell, who is also the Associate Vice President for Quantum Research and Education at UMD; Spielman; Geng; and Zhao, co-authors of the paper include former JQI postdoctoral researcher Shouvik Mukherjee and NIST scientist and former JQI postdoctoral researcher Stephen Eckel.

With Passive Approach, New Chips Reliably Unlock Color Conversion

Over the past several decades, researchers have been making rapid progress in harnessing light to enable all sorts of scientific and industrial applications. From creating stupendously accurate clocks to processing the petabytes of information zipping through data centers, these applications have turned turnkey technologies that reliably generate and manipulate light into a global market worth hundreds of billions of dollars.

One challenge that has stymied scientists is the creation of a compact source of light that fits onto a chip, which makes it much easier to integrate with existing hardware. In particular, researchers have long sought to design chips that can convert one color of laser light into a rainbow of additional colors—a necessary ingredient for building certain kinds of quantum computers and making precision measurements of frequency or time.

Now, researchers at JQI have designed and tested new chips that reliably convert one color of light into a trio of hues. Remarkably, the chips all work without any active inputs or painstaking optimization—a major improvement over previous methods. The team described their results in the journal Science on Nov. 6, 2025.

The new chips are examples of photonic devices, which can corral individual photons, the quantum particles of light. Photonic devices split up, route, amplify and interfere streams of photons, much like how electronic devices manipulate the flow of electrons.

“One of the major obstacles in using integrated photonics as an on-chip light source is the lack of versatility and reproducibility,” says JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and a professor of physics at the University of Maryland. “Our team has taken a significant step toward overcoming these limitations.”

The new photonic devices are more than mere prisms. A prism splits multicolored light into its component colors, or frequencies, whereas these chips add entirely new colors that aren’t present in the incoming light. Being able to generate new frequencies of light directly on a chip saves the space and energy that would normally be taken up by additional lasers. And perhaps more importantly, in many cases lasers that shine at the newly generated frequencies don’t even exist.

The ability to generate new frequencies of light on a chip requires special interactions that researchers have been learning to engineer for decades. Ordinarily, the interactions between light and a photonic device are linear, which means the light can be bent or absorbed but its frequency won’t change (as in a prism). By contrast, nonlinear interactions occur when light is concentrated so intensely that it alters the behavior of the device, which in turn alters the light. This feedback can generate a panoply of different frequencies, which can be collected from the output of the chip and used for measurement, synchronization or a variety of other tasks. 

Unfortunately, nonlinear interactions are usually very weak. One of the first observations of a nonlinear optical process was reported in 1961, and it was so weak that someone involved in the publication process mistook the key data for a smudge and removed it from the main figure in the paper. That smudge was the subtle signature of second harmonic generation, in which two photons at a lower frequency are converted into one photon with double the frequency. Related processes can triple the frequency of incoming light, quadruple it, and so forth.
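
The bookkeeping behind harmonic generation is just energy conservation for photons, which also explains why "doubling," "tripling" and "quadrupling" are the right words. The lines below state that standard accounting in generic notation.

```latex
% Photon energy bookkeeping for harmonic generation (standard physics, for context):
\hbar\omega + \hbar\omega \;\longrightarrow\; \hbar\,(2\omega)   % second harmonic: two photons merge into one
% and the related higher-order processes,
3\,\hbar\omega \;\longrightarrow\; \hbar\,(3\omega), \qquad
4\,\hbar\omega \;\longrightarrow\; \hbar\,(4\omega),
% triple and quadruple the frequency of the incoming light.
```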

Since that first observation of second harmonic generation, scientists have discovered ways to boost the strength of nonlinear interactions in photonic devices. In the original demonstration, the state of the art was to simply shine a laser on a piece of quartz, taking advantage of the natural electrical properties of the crystal. These days researchers rely on meticulously engineered chips tailored with photonic resonators. The resonators guide the light in tight cycles, allowing it to circulate hundreds of thousands or millions of times before being released. Each single trip through a resonator adds a weak nonlinear interaction, but many trips combine into a much stronger effect. Yet there are still tradeoffs when trying to produce a particular set of new frequencies using a single resonator. 

“If you want to simultaneously have second harmonic generation, third harmonic generation, fourth harmonic—it gets harder and harder,” says Mahmoud Jalali Mehrabad, the lead author of the paper and a former postdoctoral researcher at JQI who is now a research scientist at MIT. “You usually compensate, or you sacrifice one of them to get good third harmonic generation but cannot get second harmonic generation, or vice versa.”

In an effort to avoid some of these tradeoffs, Hafezi and JQI Fellow Kartik Srinivasan, together with Electrical and Computer Engineering Professor Yanne Chembo at the University of Maryland (UMD), have previously pioneered ways of boosting nonlinear effects by using a hoard of tiny resonators that all work in concert. They showed in earlier work how a chip with hundreds of microscopic rings arranged into an array of resonators can amplify nonlinear effects and guide light around its edge. Last year, they showed that a chip patterned with such a grid could transmute a pulsed laser into a nested frequency comb—light with many equally spaced frequencies that is used for all kinds of high-precision measurements. However, it took many iterations to design chips with the right shape to generate the precise frequency comb they were after, and only some of their chips actually worked.

The fact that only a fraction of the chips worked is indicative of the maddening hit-or-miss nature of working with nonlinear devices. Designing a photonic chip requires balancing several things in order to generate an effect like frequency doubling. First, to double the frequency of light, a nonlinear resonator must support both the original frequency and the doubled frequency. Just as a plucked guitar string will only hum with certain tones, an optical resonator only hosts photons with certain frequencies, determined by its size and shape. But once you design a resonator with those frequencies locked in, you must also ensure that they circulate around the resonator at the same speed. If not, they will fall out of sync with each other, and the efficiency of the conversion will suffer.

Together these requirements are known as the frequency-phase matching conditions. In order to produce a useful device, researchers must simultaneously arrange for both conditions to match. Unfortunately, tiny nanometer-sized differences from chip to chip—which even the best chip makers in the world can’t avoid—will shift the resonant frequencies a little bit or change the speed at which they circulate. Those small changes are enough to wash out the finely tuned parameters in a chip and render the design useless for mass production.
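
Written out for the simplest case of frequency doubling, the two requirements take a compact standard form; the notation below is generic and is not quoted from the paper.

```latex
% Frequency-phase matching for frequency doubling, in standard notation:
\omega_2 = 2\,\omega_1
% (frequency matching: the resonator must host modes at both the original and doubled frequency), and
\Delta k = k(2\omega_1) - 2\,k(\omega_1) = 0
% (phase matching: the two waves must accumulate phase at the same rate so they stay in sync).
```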

One of the authors compared the predicament to the likelihood of spotting a solar eclipse. “If you want to actually see the eclipse, that means if you look up in the sky the moon has to overlap with the sun,” says Lida Xu, a co-lead author and a graduate student in physics at JQI. Getting reliable nonlinear effects out of photonic chips requires a similar kind of chance encounter.

Small misalignments in the frequency-phase matching conditions can be overcome with active compensation that adjusts the material properties of a resonator. But that involves building in little embedded heaters—a solution that both complicates the design and requires a separate power supply.

Image caption: Researchers at JQI have designed and tested new chips that reliably convert one color of light (represented by the orange pulse in the lower left corner of the image above) into many colors (represented by the red, green, blue and dark grey pulses leaving the chip in the lower right corner). The array of rings—each one a resonator that allows light to circulate hundreds of thousands or millions of times—ensures that the interaction between the incoming light and the chip can double, triple and quadruple its frequency. (Credit: Mahmoud Jalali Mehrabad/JQI)

In the new work, Xu, Mehrabad and their colleagues discovered that the array of resonators used in previous work already increases the chances of satisfying the frequency-phase matching conditions in a passive way—that is, without the use of any active compensation or numerous rounds of design. Instead of trying to engineer the precise frequencies they wanted to create and iterating the design of the chip in hopes of getting one that worked, they stepped back and considered whether the array of resonators produced any stable nonlinear effects across all the chips. When they checked, they were pleasantly surprised to find that their chips would generate second, third and even fourth harmonics for incoming light with a frequency of about 190 THz—a standard frequency used in fiber-optic telecommunications.

As they dug into the details, they realized that the reason all their chips worked was related to the structure of their resonator array. Light circulated quickly around the small rings in the array, which set a fast timescale. But there was also a “super-ring” formed by all the smaller rings, and light circulated around it more slowly. Having these two timescales in the chip had an important effect on the frequency-phase matching conditions that they hadn’t appreciated before. Instead of having to rely on meticulous design and active compensation to arrange for a particular frequency-phase matching condition, the two timescales provide researchers with multiple shots at nurturing the necessary interactions. In other words, the two timescales essentially provide the frequency-phase matching for free.

The researchers tested six different chips manufactured on the same wafer by sending in laser light with the standard 190 THz frequency, imaging a chip from above and analyzing the frequencies leaving an output port. They found that each chip was indeed generating the second, third and fourth harmonics, which for their input laser happened to be red, green and blue light. They also tested three single-ring devices. Even with the inclusion of embedded heaters to provide active compensation, they only saw second harmonic generation from one device over a narrow range of heater temperature and input frequency. By contrast, the two-timescale resonator arrays had no active compensation and worked over a relatively broad range of input frequencies. The researchers even showed that as they dialed up the intensity of their input light, the chips started to produce more frequencies around each of the harmonics, reminiscent of the nested frequency comb created in an earlier result.
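
As a quick back-of-the-envelope check on why those harmonics come out red, green and blue, the corresponding wavelengths follow from λ = c/f. The small script below is only an illustrative calculation, not code from the experiment.

```python
# Illustrative arithmetic only (not code from the paper): wavelengths of the
# harmonics generated from a 190 THz telecom-band pump, using lambda = c / f.
C = 299_792_458  # speed of light in m/s

pump_thz = 190.0
for n, label in [(2, "second"), (3, "third"), (4, "fourth")]:
    f_hz = n * pump_thz * 1e12          # harmonic frequency in Hz
    wavelength_nm = C / f_hz * 1e9      # convert to nanometers
    print(f"{label} harmonic: {n * pump_thz:.0f} THz ~ {wavelength_nm:.0f} nm")

# Output: roughly 789 nm (deep red), 526 nm (green) and 394 nm (violet-blue),
# matching the red, green and blue light reported for the chips.
```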

The authors say that their framework could have broad implications for areas in which integrated photonics are already being used, especially in metrology, frequency conversion and nonlinear optical computing. And it can do it all without the hassle of active tuning or precise engineering to satisfy the frequency-phase matching conditions.

“We have simultaneously relaxed these alignment issues to a huge degree, and also in a passive way,” Mehrabad says. “We don't need heaters; we don't have heaters. They just work. It addresses a long-standing problem.”

Original story by Chris Cesare: With Passive Approach, New Chips Reliably Unlock Color Conversion | Joint Quantum Institute

In addition to Mehrabad, Hafezi, Srinivasan (who is also a Fellow of the National Institute of Standards and Technology), Chembo and Xu, the paper had several other authors: Gregory Moille, an associate research scientist at JQI; Christopher Flower, a former graduate student at JQI who is now a researcher at the Naval Research Laboratory; Supratik Sarkar, a graduate student in physics at JQI; Apurva Padhye, a graduate student in physics at JQI; Shao-Chien Ou, a graduate student in physics at JQI; Daniel Suarez-Forero, a former JQI postdoctoral researcher who is now an assistant professor of physics at the University of Maryland, Baltimore County; and Mahdi Ghafariasl, a postdoctoral researcher at JQI.

This research was funded by the Air Force Office of Scientific Research, the Army Research Office, the National Science Foundation and the Office of Naval Research.

Researchers Identify Groovy Way to Beat Diffraction Limit

Physics is full of pesky limits.

There are speed limits, like the speed of light. There are limits on how much matter and energy can be crammed into a region of space before it collapses into a black hole. There are even limits on more abstract things like the rate that information spreads through a network or the precision with which we can specify two physical quantities simultaneously—most notably expressed in the Heisenberg uncertainty principle.

Laser light faces its own set of limits, which are a nuisance to scientists who want to use lasers to engineer new kinds of interactions between light and matter. In particular, there’s an annoying impediment called the diffraction limit, which restricts how tightly a lens can focus a laser beam. Because light travels as a wave of electric and magnetic fields, it has a characteristic size called a wavelength. Depending on the wavelength, diffraction causes waves to bend and spread after passing through an opening. If the opening is big compared to the wavelength, there’s little diffraction. But once the opening gets to be around the size of the wavelength, diffraction causes the wave to spread out dramatically.

Image caption: A new chip made from silver efficiently guides energy to an experimental sample via an array of meticulously sized grooves. The chip delivers the energy from laser light with a wavelength of 800 nanometers to a material sample at a resolution of just a few dozen nanometers, sidestepping a limit that physics puts on laser beams. (Credit: Mahmoud Jalali Mehrabad/JQI)

This behavior means that you can’t really squeeze a laser beam down to a spot smaller than its own wavelength—around a micron in the case of off-the-shelf optical lasers. The atoms that make up solid matter are 1,000 times smaller than these optical wavelengths, so it’s impossible to focus optical lasers down to the size of atoms and deliver their power with the surgical precision that researchers often seek. Ordinarily experiments just bathe a sample of matter in a wide beam, wasting most of the power carried by the laser.
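
The rule of thumb behind that "around a micron" figure is the standard diffraction-limited spot size. The expression below is a textbook estimate, not a number taken from this paper.

```latex
% Textbook estimate of the diffraction-limited spot size (not a figure from the paper):
d \approx \frac{\lambda}{2\,\mathrm{NA}},
% where NA is the numerical aperture of the focusing optics (at most about 1 in air).
% For an 800-nanometer laser this works out to a spot no smaller than a few hundred
% nanometers, while individual atoms are roughly a thousand times smaller still.
```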

One approach to overcoming this waste is to accept the limitations of the diffraction limit and increase the effective size of the matter, which researchers at JQI reported on in a result last year. The other approach is to defy the diffraction limit and figure out a way to cram the energy of the light into a smaller space anyway.

In a paper published earlier this year in the journal Science Advances, JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and a professor of physics at UMD, and his colleagues showed a new way to sidestep the diffraction limit. They created a chip with a grooved layer of pure silver that accepts laser power in one spot and ferries it with high efficiency to a sample attached to the grooves a short distance away. Importantly, the power ends up being delivered along the chip in peaks spaced just a few dozen nanometers apart—defeating the diffraction limit by producing features much smaller than the wavelength of light that initially hits the chip. The authors say it promises to be a boon for researchers investigating light-matter interactions.

“Light-induced phenomena are a gigantic toolbox,” says Mahmoud Jalali Mehrabad, a former postdoctoral researcher at JQI who is now a research scientist at the Massachusetts Institute of Technology. “There’s photonic switches, light-induced superconductivity, light-induced magnetism—light-induced this, light-induced that. It's very common to use light to create a phenomenon or to control it.”

The silver grooves in the new chip are 60 nanometers wide and 160 nanometers deep, and they are each spaced 90 nanometers apart. At one end of the array of grooves, the silver has a grid pattern cut into it forming a photonic coupler—a pattern that takes laser light hitting the chip from above, bends it into the plane of the chip, and sends it into the grooves. Once the light reaches the grooves, it excites what the researchers call metasurface plasmon polaritons (MPPs), which are combined excitations of photons (particles of light) and electrons in the silver. It’s the MPPs that end up spaced just a few dozen nanometers apart as they travel down the grooves, delivering the laser power with a resolution far below the diffraction limit set by the wavelength of the laser light.

The size of the grooves was carefully calculated to ensure that the power from the laser traveled without leaking out. Even so, it was hard to fabricate chips that had the optimal power delivery at the right wavelength.

“Getting good quality chips that actually give you the peak transmission at the correct wavelength and the correct spatial diffraction pattern—that was very challenging,” says Supratik Sarkar, a graduate student in physics at JQI and the lead author of the paper. 

Sarkar designed scores of chips and worked closely with You Zhou, an assistant professor of materials science and engineering at UMD, and colleagues, who fabricated the chips. Sarkar then did the grunt work of testing them all to find the handful that worked well with the 800-nanometer laser in their experiment.

To show off the capabilities of their new design, Sarkar and the team performed a benchmark experiment, recreating the observation of a shift in the energy spectrum of an atomically thin material called molybdenum diselenide (MoSe2). MoSe2 contains quasiparticles called excitons, which are combinations of a free-moving electron and a hole—an electron vacancy in the material’s structure that acts like a mobile positively charged particle. A small amount of energy binds the electron to the hole, and, in the presence of an electric field, that energy can shift. The shift can be detected by shining a light and measuring the reflection to determine how much energy the excitons absorbed.
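
In the simplest textbook picture, a field-induced shift of this kind is a quadratic (DC Stark) shift of the exciton energy. The expression below is that standard approximation, offered for context rather than taken from the paper.

```latex
% Simplest textbook description of a field-induced shift (the DC Stark effect),
% given only for context and not quoted from the paper:
\Delta E \approx -\tfrac{1}{2}\,\alpha\, E^2,
% where E is the local electric field and \alpha is the exciton's polarizability,
% so a stronger delivered field produces a larger, measurable shift in the energy
% at which the excitons absorb light.
```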

The researchers attached an MoSe2 sample across the top of several grooves on their silver chip, pulsed their 800-nanometer laser into the photonic coupler for a fraction of a second, and probed the sample by flashing a separate pulsed laser. They collected the light reflected by the MoSe2 sample using a microscope and a camera. They showed that—as expected—the exciton energy shifted by a small amount.

They performed the same experiment in the conventional way by pointing both the 800-nanometer laser and the probe laser directly at another MoSe2 sample, which was placed on a smooth sheet of silver. To make the comparison fair, they used a sheet of silver produced in the same way by Zhou’s lab, just without the grooves. They observed the same small energy shift in the excitons, validating their result with the grooved chip. Crucially, though, the conventional method required nearly 100 times more laser power than the method using their chip.

As another demonstration of the advantages of the new chip, the researchers also measured a clear signature that the MPPs traveling down the grooves could deliver more targeted power than the laser. The MPPs in neighboring grooves generated peaks and valleys where the electric field was stronger and weaker. This rolling landscape—which varied over dozens of nanometers instead of hundreds—altered the behavior of the excitons in the MoSe2 sample, causing their energy to shift. Since different excitons had different experiences of the modulated electric field, the energies of excitons across the sample varied slightly. Measurements with the new chip showed that this modulation broadened the set of energies that the excitons had—a feature that was absent from a similar experiment without the grooved chip.

The new chip also has some additional advantages. By separating where the input light is pumped into the chip from where the output light is collected from a sample, the new device can avoid two problems that plague typical experiments.

One problem is heating. When the pumped-in light hits a material sample directly, it tends to heat it up. The new chips require less pump power, which introduces less heat into the experiment. They also keep the power delivery far away from the sample—so distant that during a typical experiment any heat that is introduced to the chip won’t have enough time to reach the sample and interfere with its behavior.

The other problem in conventional experiments has to do with the pump light scattering off a sample and reflecting back into the camera used for measurement. It’s a bit like trying to see the stars during the day—like the sun, the reflected pump laser is so bright that it washes out all the pinprick details. Overcoming this glare normally requires tediously characterizing the pump light so that it can be subtracted from the measured light. But because the pump light is injected into the new chip far away from the sample, it significantly reduces the noise that ends up in the camera.

The authors say that they are now working with other groups who are interested in putting their samples onto one of the grooved chips. They also have plenty of ideas of their own for how to play with the new tool.

“This is very cool, because now you can have periodicity of light in a sub-diffraction sort of regime experienced by matter,” says Mehrabad, who was a co-lead author of the paper. “You can engineer lattice physics. You can open a band gap. You can do scattering. There is a lot of cool physics to be done with this.”

Original story by Chris Cesare: Researchers Identify Groovy Way to Beat Diffraction Limit | Joint Quantum Institute

In addition to Hafezi, Mehrabad, Sarkar and Zhou, the paper had several additional authors: Daniel Suárez-Forero, a co-lead author and former postdoctoral researcher at JQI who is now an assistant professor of physics at the University of Maryland, Baltimore County; Liuxin Gu, a co-lead author and a graduate student in materials science and engineering at UMD who helped fabricate the chips used in the experiments reported in the paper; Christopher Flower, a former physics graduate student at JQI; Lida Xu, a physics graduate student at JQI; Kenji Watanabe, a materials scientist at the National Institute for Materials Science (NIMS) in Japan; Takashi Taniguchi, a materials scientist at NIMS; Suji Park, a staff scientist at Brookhaven National Laboratory (BNL) in New York; and Houk Jang, a staff scientist at BNL.

This work was supported by the Army Research Office, the Defense Advanced Research Projects Agency, the National Science Foundation, and the Department of Energy.

Researchers Imagine Novel Quantum Foundations for Gravity

Questioning assumptions and imagining new explanations for familiar phenomena are often necessary steps on the way to scientific progress.

For example, humanity’s understanding of gravity has been overturned multiple times. For ages, people assumed heavier objects always fall quicker than lighter objects. Eventually, Galileo overturned that knowledge, and Newton went on to lay down the laws of motion and gravity. Einstein in turn questioned Newton’s version of gravity and produced the theory of general relativity, also known as Einstein's theory of gravity. Einstein imagined a new explanation of gravity connected to the curvature of space and time and revealed that Newton’s description of gravity was just a good approximation for human circumstances.

Image caption: Researchers have proposed new models of how gravity could result from many quantum particles interacting with massive objects. In the image, the orientations of quantum particles with spin (the blue arrows) are influenced by the presence of the masses (represented by red balls). Each mass causes the spins near it to orient in the same direction with a strength that depends on how massive it is (represented by the difference in size between the red balls). The coordination of the spins favors objects being close together, which pulls the masses toward each other. (Credit: J. Taylor)

Einstein’s theory of gravity has been confirmed with many experiments, but scientists studying gravity at the tiniest scales have uncovered lingering mysteries around the ubiquitous force. For minuscule things like atoms or electrons, the rules of quantum physics take over and interactions are defined by discrete values and particles. However, physicists haven’t developed an elegant way to definitively combine their understanding of gravity with the reality of quantum physics experiments. This lack of a quantum explanation makes gravity stand out as an enigma among the four fundamental forces: gravity, the electromagnetic force, the strong nuclear force and the weak nuclear force. Every other force, like friction, pressure or tension, is really just one or more of those four forces in disguise.

To unravel gravity’s lingering idiosyncrasies, researchers are designing new experiments and working to identify the foundations of gravity at the quantum scale. For decades, scientists have been proposing alternative models, but none has emerged as the definitive explanation.

“We know how electromagnetism works,” says Daniel Carney, a scientist at Lawrence Berkeley National Laboratory (LBNL) who formerly worked as a postdoctoral researcher at JQI and the Joint Center for Quantum Information and Computer Science (QuICS). “We know how the strong and weak nuclear forces work. And we know how they work in quantum mechanics very precisely. And the question has always been, is gravity going to do the same thing? Is it going to obey the same kind of quantum mechanical laws?”

The three other fundamental forces are each associated with interactions where quantum particles pop into existence to transmit the force from one spot to another. For instance, electromagnetic forces can be understood as particles of light, called photons, moving around and mediating the electromagnetic force. Photons are ubiquitous and well-studied; they allow us to see, heat food with microwave ovens and listen to radio stations. 

Physicists have proposed that similar particles might carry the effect of gravity, dubbing the hypothetical particles gravitons. Many researchers favor the idea of gravitons existing and gravity following the same types of quantum laws as the other three fundamental forces. However, experiments have failed to turn up a single graviton, so some researchers are seeking alternatives, including questioning if gravity is a fundamental force at all. 

What might the world look like if gravity is different, and gravitons are nowhere to be found? In an article published in the journal Physical Review X on August 11, Carney, JQI Fellow Jacob Taylor and colleagues at LBNL and the University of California, Berkeley are laying the early groundwork for graviton-free descriptions of gravity. They presented two distinct models that each sketch out a vision of the universe without gravitons, proposing instead that gravity emerges from interactions between massive objects and a sea of quantum particles. If the models prove to be on the right track, they are still just a first step. Many details, like the exact nature of the quantum particles, would still need to be fleshed out.

In the new proposals, gravity isn’t a fundamental force like electromagnetism but is instead an emergent force like air pressure. The force created by air pressure doesn’t have the equivalent of a photon; instead, pressure results from countless gas molecules that exist independent of the force and behave individually. The unorganized molecules move in different directions, hit with different strengths, and sometimes work against each other, but on a human scale their combined effect is a steady push in one direction. 

Similarly, instead of including discrete gravitons that embody a fundamental force of gravity, the new models consider many interacting quantum particles whose combined behavior produces the pull of gravity. If gravity is an emergent force, researchers need to understand the quirks of the collective process so they can be on the lookout for any resulting telltale signs in experiments. 

The two models the group introduced in the paper are intentionally oversimplified—they are what physicists call toy models. The models remain hazy or flexible on many details, including the type of particles involved in the interactions. However, the simplicity of the models gives researchers a convenient starting point for exploring ideas and eventually building up to more complex and realistic explanations.

“We’re using these toy models … because we understand that there are many differences between this sort of microscopic model we proposed here and a model that is consistent with general relativity,” says Taylor, who is also a QuICS Fellow and was also a physicist at the National Institute of Standards and Technology when the research was conducted. “So rather than assume how to get there, we need to find the first steps in the path.”

The initial steps include laying out potential explanations and identifying the signature each would produce in experiments. Both Taylor and Carney have spent about a decade thinking about how to make grounded predictions from quantum theories of gravity. In particular, they have been interested in the possibility of gravity resulting from many particles interacting and coming to equilibrium at a shared temperature. 

They were inspired by research by University of Maryland Physics professor Ted Jacobson that hinted at black holes and Einstein’s theory of gravity being linked to thermodynamics. Thermodynamics is the physics of temperatures and the way that energy, generally in the form of heat, moves around and influences large groups of particles. Thermodynamics is crucial to understanding everything from ice cream melting to stars forming. Similarly, the researchers think a theory of gravity might be best understood as the result of many interacting particles producing a collective effect tied to their temperature.

However, while there are theoretical clues that a thermodynamic foundation of gravity might exist, experiments haven’t provided researchers with any indication of what sort of quantum particles and interactions might be behind an emergent form of gravity. Without experimental evidence supporting any choice, researchers have been free to propose any type of quantum particle and any form of interaction to be the hypothetical cause of gravity. 

Taylor and Carney started with the goal of recreating the basic gravitational behaviors described by Newton instead of immediately attempting to encompass all of Einstein’s theory. A key feature described by Newton is the very particular way that gravity gets weaker as separation increases: The strength of the attraction is inversely proportional to the square of the distance between two objects, a relationship called the inverse-square force law. It means that as you move away from the Earth, or some other mass, its gravitational pull drops off rapidly; doubling your distance cuts the pull to a quarter of its strength. But identifying quantum interactions with matter that could create even that general behavior wasn’t trivial, and that first step to imagining a new form of gravity eluded researchers.
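
In symbols, the target behavior is Newton's familiar law of gravitation, written here in generic notation:

```latex
% Newton's inverse-square law of gravitation, the behavior the toy models aim to reproduce:
F = \frac{G\, m_1\, m_2}{r^2},
% where G is Newton's constant and r is the separation between the two masses;
% doubling r cuts the attraction to a quarter of its previous strength.
```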

In the fall of last year, Carney and Manthos Karydas, a postdoctoral researcher working with Carney at LBNL who is also an author of the paper, worked out a simple model of quantum interactions that could capture the needed law. After Carney discussed the idea with Taylor, they were able to formulate a second distinct model with an alternative type of interaction.

“Dan came into my office and outlined the basic mechanism on the chalkboard,” Karydas says. “I found it very elegant, though his initial model gave a constant force between the masses. With some refinement, we managed to recover the inverse-square force law we had been aiming for.”

Both models assume there are many particles at a given temperature that can interact with all the masses included in the model. Unlike gravitons, these new particles can be understood as having a permanent existence of their own, independent of gravity.

For convenience, they built the models so that the sea of quantum particles consists entirely of spins, which behave like tiny magnets that tend to align with magnetic fields. A vast variety of quantum objects can be described as spins, and they are ubiquitous in quantum research.

In one of the models, which the team called the local model, the quantum spins are spread evenly on a grid, and their interactions depend on their position relative to both the masses and each other. Whenever a massive object is placed somewhere on the grid, it interacts with the nearby spins, making them more likely to point in the same direction. And when it moves through the crowd, a cloud of quantum influence accompanies it.

The clouds of coordination around a mass can combine when two masses approach one another. The combination of their influence into the same space decreases the energy stored in the surrounding quantum particles, drawing the masses toward each other.
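
The following sketch is not the authors' Hamiltonian; it is a deliberately cartoonish illustration, with invented functional forms (exponential influence, quadratic response) and a made-up helper function `toy_energy`, of the general mechanism described above: each mass polarizes the spins around it, the stored energy drops where the two clouds of influence overlap, and so the total energy decreases as the masses approach.

```python
# A cartoon of the "overlapping clouds lower the energy" mechanism (NOT the
# authors' model): spins on a line are polarized by nearby masses, and the
# stored energy is assumed to drop quadratically with the total local influence.
import numpy as np

def toy_energy(mass_positions, xi=5.0, n_sites=200):
    """Toy energy of a line of polarizable spins coupled to point masses."""
    sites = np.arange(n_sites, dtype=float)
    # each mass polarizes the spins near it, with influence decaying over the scale xi
    influence = sum(np.exp(-np.abs(sites - x) / xi) for x in mass_positions)
    # assumed quadratic response: spins store less energy where the combined
    # influence is large, so overlapping clouds of influence are favored
    return -np.sum(influence**2)

for separation in (60, 30, 10):
    energy = toy_energy([100 - separation / 2, 100 + separation / 2])
    print(f"separation {separation:3d} sites -> toy energy {energy:8.2f}")

# The energy is lower (more negative) at smaller separations, so this toy system
# pulls the two masses toward each other, mimicking an emergent attraction.
```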

In contrast, the original model that Carney and Karydas developed doesn’t paint a clear picture of how the spins are distributed and behave in space. They were inspired by the way waves behave when trapped between objects: When light is trapped between two mirrors or sound waves are trapped between two walls, only waves of specific lengths are stable for any particular spacing between the objects. You can define a clear set of all the waves that neatly fit into the given space.

While the particles in the model are spins and not waves, properties of their interactions resemble waves that must neatly fit between the two masses. Each spin interacts with every possible pair of masses in this wave-like way. The group dubbed this model the “non-local model” since the interactions don’t depend on where the quantum particles or masses are located individually but just on the distance between the masses. Since the positions of the spins don’t influence anything, the model doesn’t describe their arrangement in space at all. The group showed that the appropriate set of wave-like interactions can make the quantum particles store less energy when objects are close together, which will pull the objects towards each other.

“The nonlocal model seemed kind of bizarre when we first were writing it down,” Taylor says. “And yet, why should we guess which one is correct? We don't think either of them is correct in the fundamental sense; by including them both, we're being clear to the physics community that these are ways to get started without presupposing where to go.”

The particles being spins isn’t an essential feature of the models. The team demonstrated that other types of particles are worth considering by redoing their work on the non-local model for an alternative type of particle. They showed that the wave-like interactions could also produce gravity if the proposed particles were quantum harmonic oscillators, which can bounce or swing between states similar to springs and pendulums. 

The group’s calculations illustrate that both types of quantum interactions could produce a force with the signature behavior of Newton’s gravity, and the team described how the details of the interactions can be tailored so that the strength of the force matches what we see in reality. However, neither model begins to capture the intricacies of Einstein’s theory of gravity. 

“This is not a new theory of gravity,” Taylor says. “I want to be super clear about this. This is a way to reason about how thermodynamic models, including possibly those of gravity, could impact what you can observe in the lab.”

Despite the intentional oversimplification of both models, they still provide insights into what results researchers might see in future experiments. For instance, the interactions of the particles in both models can impact how much noise—random fluctuations—gravity imparts to objects as it pulls on them. In experiments, some noise is expected to come from errors introduced by the measurement equipment itself, but in these models, there is also an inescapable amount of noise produced by gravity.

The many interactions of quantum particles shouldn’t produce a perfectly steady pull of gravity but should instead impart tiny shifts of momentum that produce the gravitational force on average. It is similar to the minuscule, generally imperceptible kicks of individual gas molecules collectively producing air pressure: At large scales, gravity in the models seems like a constant force, but at small scales, it is actually the uneven pitter-patter of interactions tugging irregularly. So as researchers make more and more careful measurements of gravity, they can keep an eye out for a fluttering that they can’t attribute to their measurement technique and check whether it fits with an emergent explanation of gravity.
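
The intuition that many tiny random kicks look like a smooth force plus residual jitter can be made concrete with a few lines of simulation. This is a generic statistical illustration with arbitrary numbers, not a simulation of either model in the paper.

```python
# Generic illustration (not a simulation of the paper's models): many small,
# randomly sized kicks add up to a nearly steady force plus residual noise.
import numpy as np

rng = np.random.default_rng(0)
n_kicks = 100_000
mean_kick, spread = 1e-6, 1e-6           # arbitrary units for each tiny momentum kick

kicks = rng.normal(mean_kick, spread, n_kicks)
total = kicks.sum()                      # net momentum transferred
expected = n_kicks * mean_kick           # what a perfectly smooth force would give
fluctuation = total - expected           # leftover jitter a precise experiment could see

print(f"net transfer : {total:.4f}")
print(f"smooth force : {expected:.4f}")
print(f"fluctuation  : {fluctuation:+.2e}  (grows only like sqrt(N), so it is tiny but nonzero)")

# A sufficiently sensitive measurement of gravity could look for exactly this
# kind of irreducible flutter riding on top of the average pull.
```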

While the two models share some common features, they still produce slightly different predictions. For instance, the non-local model only predicts noise if at least two masses are present, but the local model predicts that even a solitary mass will constantly be buffeted by random fluctuations.

Moving forward, these models need to be compared to results from cutting-edge experiments measuring gravity and improved to capture additional phenomena, such as traveling distortions of space called gravitational waves, that are described by Einstein’s theory of gravity. 

“The clear next thing to do, which we are trying to do now, is make a model that has gravitational waves because we know those exist in nature,” Carney says. “So clearly, if this is going to really work as a model of nature, we have to start reproducing more and more things like that.”

Story by Bailey Bedford

In addition to Carney, Karydas and Taylor, co-authors of the paper include Thilo Scharnhorst, a graduate student at the University of California, Berkeley (UCB), and Roshni Singh, a graduate student at UCB and LBNL.

A Cosmic Photographer: Decades of Work to Get the Perfect Shot

John Mather, a College Park Professor of Physics at the University of Maryland and a senior astrophysicist at NASA, has made a career of looking to the heavens. He has led projects that have revealed invisible stories written across the sky and helped us understand our place in the universe.

He left his mark on physics by uncovering the earliest chapter of our universe’s story. He and his colleagues captured an image of the invisible remains of some of the universe’s first light. To get the image, they built and used NASA’s Cosmic Background Explorer (COBE) satellite, which Mather played a key role in making a reality in 1989. Researchers used the images of the primordial light, called the cosmic microwave background radiation, to confirm that the universe burst forth from a very hot and dense early state—a process commonly called the big bang. In 2006, Mather shared the Nobel Prize in physics for the work.

After COBE, Mather became a senior project scientist on NASA’s James Webb Space Telescope (JWST) in 1995. He worked for more than a quarter of a century to make the state-of-the-art telescope a reality before it finally launched in December of 2021.

But Mather wasn’t ready to end his career when the JWST became a reality. The launch of the JWST heralded a new chapter for him, in which he splits his time between sharing the JWST’s results with the world and developing new projects to uncover more of the universe’s mysteries.

JWST: A Long-Haul Effort

Launching the JWST was the start of its story as a tool for scientific discovery, but it was also the conclusion of a massive effort by Mather and many others. Mather had been part of the JWST team since the beginning. He worked on the original proposal in 1995 and proceeded to spend the next decades helping engineers design the telescope; coordinating with team members from Europe, Canada and across the US; and generally working to keep the project on track.

The years of effort produced an array of mirrors designed to unfold into a 21-foot-wide final configuration. The delicate mirrors and necessary equipment were placed on top of a rocket, and Mather and his colleagues put their faith into their years of preparation.

As the final seconds to the launch counted down, Mather watched the fate of the mission play out from his sofa at home. The JWST team had a busy schedule planned for months after the launch, and they didn’t want cases of COVID-19, or anything else, disrupting their carefully laid plans.

“Nobody was allowed to go anywhere, to take any chances with catching that bug,” Mather said. “Because we needed them to be alive and ready to work at any moment.”

The launch went off without a hitch, but that didn’t mean the team could breathe a sigh of relief. It was still possible the telescope could fail to produce any images. The telescope had to travel almost a million miles to its final orbit, successfully unfold itself and calibrate multiple components before researchers could tell if it was actually working.

Its predecessor, the Hubble Space Telescope, couldn’t take images in focus when it was first deployed because of a slightly misshapen mirror. A similar issue would be much more devastating for the JWST because its final destination was almost 3,000 times farther from Earth—about four times farther than the moon. So any repair visit would be impractical and unlikely to be attempted.

“The sort of moment of truth was the first image we got which showed focus,” Mather said. “About 40 people or so were assembled in the control rooms at the Space Telescope Science Institute. They all got to look at this wonderful image at the same time, and it was covered with galaxies. So we knew that not only had we done a great engineering job but there were things to study everywhere.”

JWST: Reaping the Benefits 

The JWST has so much to study because it can see much farther than its predecessors. When light travels far enough, the waves making it up get stretched out and become harder to see (the universe itself is expanding, which stretches out light along with it). As planned, detecting this ancient light has revealed objects from earlier in the universe’s history than scientists had ever seen (after the messy period that produced the microwave background radiation). With this new window into the past, scientists have confirmed theories, such as how galaxies take time to spin themselves into shape, and uncovered new mysteries, like unexpectedly bright galaxies in the early universe.

Besides capturing stretched-out light, the JWST has another tool for observing the farthest reaches of space. Like a photographer pulling out a high-powered lens to capture a distant subject, it can zoom in on distant corners of the universe. NASA didn’t have to build these lenses; the JWST takes advantage of natural ones formed by the gravity of many galaxies clustered together. The collective gravity warps space and makes a gravitational lens that directs light along a curved path, similar to how a glass lens bends light.
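
The bending itself follows a simple formula from general relativity: light passing a mass M at a closest distance b is deflected by an angle of roughly 4GM/(c²b). As a hedged sketch, using the Sun as the simplest textbook case rather than the galaxy clusters the JWST actually relies on, the deflection works out to under two arcseconds; clusters, being vastly more massive, bend light far more dramatically:

    import math

    # Deflection of light grazing a massive object: alpha = 4 * G * M / (c**2 * b)
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_sun = 1.989e30     # mass of the Sun, kg
    R_sun = 6.96e8       # radius of the Sun, m (light just grazing its edge)

    alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
    alpha_arcsec = alpha_rad * 180 / math.pi * 3600
    print(alpha_arcsec)  # ~1.75 arcseconds for the Sun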

A gravitational lens took center stage in the first JWST image released to the public and revealed the glittery details of one of Mather’s favorite galaxies to talk about—the “Sparkler Galaxy.” The signature sparkles are dense clusters of stars that are important for understanding the initial formation of a galaxy.

The JWST isn’t only revealing the distant universe; it is also giving us better snapshots of our own neighborhood. The specialized cameras on the JWST have been used to detect light carrying the signatures of interactions with specific molecules. Researchers have used this to study other planets and moons in our solar system.

“I was ignorant about the solar system, and I am really surprised and pleased to see that we're able to map the presence of molecules on the satellites in our solar system,” Mather said. “We see that on Titan, which is a satellite of Saturn, we're able to make a map of where different molecules are, and that's interesting, because it's the only satellite in the solar system that has an atmosphere of its own to speak of.”

The data from inside and outside our solar system keep pouring in, and researchers continue to propose new ways the JWST can advance science. After the team was sure the project was running smoothly, Mather handed over his position as the JWST’s senior project scientist to Jane Rigby in 2023. But that doesn’t mean he hasn’t been keeping an eye on the mission.

“Following the conclusion of my work on the James Webb Space Telescope, I follow along the science that's being produced, and I give a lot of public talks about that,” Mather said. “I really enjoy doing that because people want to know what we found, and they are still thrilled with the brilliant engineering.”

Orbiting Starshades: Going the Distance to Get the Shot

While the JWST results continue to excite Mather, he wanted to return to his roots: problem-solving and developing projects to uncover new pieces of the heavens.

“I enjoy the creative part at the beginning, and after you get past that, then I'm a little nervous and impatient, and my job was basically running a lot of meetings for a long time, and that's not as much fun as thinking of something new to work on, for me,” Mather said. “It's definitely important to do, but it's just a different thing.”

The new project that has caught Mather’s interest is getting the perfect lighting to photograph planets in other solar systems—exoplanets. To do so, he wants to put a satellite, called a starshade, into orbit. A starshade would obstruct the light of a star before it reaches a telescope, but it needs to be outside the atmosphere to work. A starshade could be paired with a telescope that is also in space, like the Hubble Space Telescope, but Mather thinks starshades have the greatest potential when partnered with the massive telescopes we build on the ground.

Obstructing the light from a star should allow the telescope to pick up the much dimmer light reflected by a planet orbiting it. It’s like watching a plane flying in the same part of the sky as the sun: To avoid being blinded, you raise your hand to block out the sun.

By blocking a star’s light, a telescope can not only spot nearby planets but also detect the signature of molecules, like oxygen and water, that the light interacted with when it passed through a planet’s atmosphere. Such measurements would dramatically upgrade our ability to discover and study many more planets throughout the universe.

Current methods of identifying exoplanets generally rely on observing a planet’s gravitational influence on its star or detecting the planet passing between its star and us (we notice a slight dimming of the star, rather than actually observing the planet). These approaches let us discover planets around stars that are much smaller than our sun or detect large planets—similar to the gas giants in our solar system—that are near their star. But the available techniques leave us effectively blind to the planets most like Earth.
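
The transit method makes the problem concrete: the dimming a planet causes is roughly the ratio of the planet’s disk area to the star’s. A hedged back-of-the-envelope comparison (using standard textbook radii, not numbers from the article) shows why a Jupiter-sized planet is far easier to catch than an Earth-sized one:

    # Transit depth: fraction of starlight blocked ~ (planet radius / star radius)**2
    r_sun = 696_000      # solar radius, km
    r_earth = 6_371      # Earth radius, km
    r_jupiter = 69_911   # Jupiter radius, km

    print((r_earth / r_sun) ** 2)     # ~8.4e-5, a dimming of less than 0.01 percent
    print((r_jupiter / r_sun) ** 2)   # ~1.0e-2, a dimming of about 1 percent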

However, before they can hunt for Earth-like exoplanets, researchers must solve the unique challenges of getting a working starshade into orbit. A planet can be billions of times dimmer than its star, and because of the vast distances between us and other solar systems, a planet and its star are almost indistinguishable specks. To get the right lighting, scientists must place the starshade in front of the star without accidentally covering the planet right next to it.
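
How close those specks sit on the sky is easy to estimate. As a hedged illustration with generic numbers (an Earth-Sun-like pair viewed from ten parsecs, roughly 33 light-years, not a specific target from the article), the separation comes out to about a tenth of an arcsecond:

    import math

    # Angular separation of a planet from its star, as seen from Earth
    au_km = 1.496e8              # Earth-Sun distance (1 astronomical unit), km
    parsec_km = 3.086e13         # 1 parsec, km
    distance_km = 10 * parsec_km # a star system 10 parsecs away

    theta_rad = au_km / distance_km
    theta_arcsec = theta_rad * 180 / math.pi * 3600
    print(theta_arcsec)          # ~0.1 arcsecond between star and planet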

They must also account for the fact that light sometimes deviates from a straight-line path. Light travelling from one medium to another, like from air to water or from thin air to dense air, shifts its direction (stars “twinkle” because of these distortions occurring as their light travels through Earth’s atmosphere). Light also changes its direction by bending around the edges of objects—including the edges of the starshade.

Combining all the known constraints gave Mather and his colleagues strict requirements for designing a starshade to work with a telescope on the ground.

“It needs to be a pointy sunflower, 100 meters in diameter, located at least 175,000 kilometers away from us in orbit around the Earth,” Mather said. “So that's huge. And the normal ways we would build something like that would make it also very heavy.”
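
Those two numbers fit together with the angular scales above. A hedged check of the arithmetic (the interpretation here is mine, not spelled out in the article) shows that a 100-meter shade at 175,000 kilometers covers about a tenth of an arcsecond of sky, roughly the same scale as the star-planet separations the telescope would be trying to resolve:

    import math

    # Angular size of the proposed starshade as seen from a telescope on the ground
    diameter_m = 100             # starshade diameter, meters
    distance_m = 175_000e3       # minimum distance of 175,000 km, in meters

    theta_rad = diameter_m / distance_m
    theta_arcsec = theta_rad * 180 / math.pi * 3600
    print(theta_arcsec)          # ~0.12 arcsecond of sky blocked by the shade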

The petals of the massive flower shape that researchers have settled on ensure the stray light deflected around them doesn’t get sent toward the center of a telescope. But the potential bulk of the structure has a cost; heavy satellites are expensive to launch and difficult to maneuver into position. So now Mather and his colleagues are brainstorming ways to make the starshade as light as possible.

One of the approaches they are considering is making it inflatable: Cut a sheet into the right shape and make a balloon frame to support it. But the approach leaves them concerned about the whole thing popping. While space is mostly empty, there are small objects—micrometeorites—zipping around, and over time collisions happen. So Mather and his colleagues also need to make the starshade durable.

A key idea they are pursuing is sending up multiple layers of sheets so that when a micrometeorite slams through them, the remaining layers can still block out most of a star’s light. Leaking light only becomes an issue if the holes punched in every layer happen to line up along the exact path of the star’s light. However, the team still needs a way to reinforce the inflatable framework to survive collisions.

The team is considering building the frame using resins or other materials that could undergo a chemical transformation into a sturdy structure after being deployed into shape. Another idea they are playing with is to deflate the starshade when it is not in use so that it is a smaller target and will get hit less often.

While developing the starshade, Mather is also pursuing related projects, like putting a stable standard light source—an artificial star—in orbit to aid ground-based telescopes. Having a steady light at a known brightness in the sky can help astronomers study stars. Astronomers don’t always know the actual brightness of objects they see through telescopes, and analysis is complicated because the atmosphere distorts the light before it reaches the telescope. Having a steady light above the atmosphere gives astronomers a point of comparison for determining the true brightness of what they observe. More importantly, it can also help them reverse engineer the distortions of the atmosphere and piece together the original image.

This technique will support future experiments using orbiting starshades since any light from the planet that reaches the ground will be distorted and require correction. Mather is part of a project led by George Mason University researchers that plans to put an artificial star into orbit in 2029.

Mather is also throwing his support behind other projects that are further into their development, like the Black Hole Explorer, which aims to observe light that has orbited black holes. While Mather’s various projects generally look into the far reaches of space, he’s still invested in learning about our home. Both Mather’s past and upcoming work explore our origins as they open up the wider universe to us.

“We actually said we were going to try to discover our own history by looking at the history of other places,” Mather said. “So what's the history of our own galaxy? Well, you can't really tell, but you can look at the formation of galaxies. You can look back in time by looking at things that are far away. So we're getting a photo album of ourselves by looking at our cousins way out there and seeing what were they like when they were young.”

Written by Bailey Bedford