Researchers Imagine Novel Quantum Foundations for Gravity

Questioning assumptions and imagining new explanations for familiar phenomena are often necessary steps on the way to scientific progress.

For example, humanity’s understanding of gravity has been overturned multiple times. For ages, people assumed heavier objects always fall quicker than lighter objects. Eventually, Galileo overturned that knowledge, and Newton went on to lay down the laws of motion and gravity. Einstein in turn questioned Newton’s version of gravity and produced the theory of general relativity, also known as Einstein's theory of gravity. Einstein imagined a new explanation of gravity connected to the curvature of space and time and revealed that Newton’s description of gravity was just a good approximation for human circumstances.

Researchers have proposed new models of how gravity could result from many quantum particles interacting with massive objects. In the image, the orientation of quantum particles with spin (the blue arrows) is influenced by the presence of the masses (represented by red balls). Each mass causes the spins near it to orient in the same direction with a strength that depends on how massive it is (represented by the difference in size between the red balls). The coordination of the spins favors objects being close together, which pulls the masses toward each other. (Credit: J. Taylor)

Einstein’s theory of gravity has been confirmed with many experiments, but scientists studying gravity at the tiniest scales have uncovered lingering mysteries around the ubiquitous force. For minuscule things like atoms or electrons, the rules of quantum physics take over and interactions are defined by discrete values and particles. However, physicists haven’t developed an elegant way to definitively combine their understanding of gravity with the reality of quantum physics experiments. This lack of a quantum explanation makes gravity stand out as an enigma among the four fundamental forces—the forces of gravity, the electromagnetic force, the strong nuclear force and the weak nuclear force. Every other force, like friction, pressure or tension, is really just one or more of those four forces in disguise.

To unravel gravity’s lingering idiosyncrasies, researchers are designing new experiments and working to identify the foundations of gravity at the quantum scale. For decades, scientists have been proposing alternative models, but none has emerged as the definitive explanation.

“We know how electromagnetism works,” says Daniel Carney, a scientist at Lawrence Berkeley National Laboratory (LBNL) who formerly worked as a postdoctoral researcher at JQI and the Joint Center for Quantum Information and Computer Science (QuICS). “We know how the strong and weak nuclear forces work. And we know how they work in quantum mechanics very precisely. And the question has always been, is gravity going to do the same thing? Is it going to obey the same kind of quantum mechanical laws?”

The three other fundamental forces are each associated with interactions where quantum particles pop into existence to transmit the force from one spot to another. For instance, electromagnetic forces can be understood as particles of light, called photons, moving around and mediating the electromagnetic force. Photons are ubiquitous and well-studied; they allow us to see, heat food with microwave ovens and listen to radio stations. 

Physicists have proposed that similar particles might carry the effect of gravity, dubbing the hypothetical particles gravitons. Many researchers favor the idea of gravitons existing and gravity following the same types of quantum laws as the other three fundamental forces. However, experiments have failed to turn up a single graviton, so some researchers are seeking alternatives, including questioning if gravity is a fundamental force at all. 

What might the world look like if gravity is different, and gravitons are nowhere to be found? In an article published in the journal Physical Review X on August 11, Carney, JQI Fellow Jacob Taylor and colleagues at LBNL and the University of California, Berkeley are laying the early groundwork for graviton-free descriptions of gravity. They presented two distinct models that each sketch out a vision of the universe without gravitons, proposing instead that gravity emerges from interactions between massive objects and a sea of quantum particles. If the models prove to be on the right track, they are still just a first step. Many details, like the exact nature of the quantum particles, would still need to be fleshed out.

In the new proposals, gravity isn’t a fundamental force like electromagnetism but is instead an emergent force like air pressure. The force created by air pressure doesn’t have the equivalent of a photon; instead, pressure results from countless gas molecules that exist independent of the force and behave individually. The unorganized molecules move in different directions, hit with different strengths, and sometimes work against each other, but on a human scale their combined effect is a steady push in one direction. 

Similarly, instead of including discrete gravitons that embody a fundamental force of gravity, the new models consider many interacting quantum particles whose combined behavior produces the pull of gravity. If gravity is an emergent force, researchers need to understand the quirks of the collective process so they can be on the lookout for any resulting telltale signs in experiments. 

The two models the group introduced in the paper are intentionally oversimplified—they are what physicists call toy models. The models remain hazy or flexible on many details, including the type of particles involved in the interactions. However, the simplicity of the models gives researchers a convenient starting point for exploring ideas and eventually building up to more complex and realistic explanations.

“We’re using these toy models … because we understand that there are many differences between this sort of microscopic model we proposed here and a model that is consistent with general relativity,” says Taylor, who is also a QuICS Fellow and was a physicist at the National Institute of Standards and Technology when the research was conducted. “So rather than assume how to get there, we need to find the first steps in the path.”

The initial steps include laying out potential explanations and identifying the signature each would produce in experiments. Both Taylor and Carney have spent about a decade thinking about how to make grounded predictions from quantum theories of gravity. In particular, they have been interested in the possibility of gravity resulting from many particles interacting and coming to equilibrium at a shared temperature. 

They were inspired by research by University of Maryland Physics professor Ted Jacobson that hinted at black holes and Einstein’s theory of gravity being linked to thermodynamics. Thermodynamics is the physics of temperatures and the way that energy, generally in the form of heat, moves around and influences large groups of particles. Thermodynamics is crucial to understanding everything from ice cream melting to stars forming. Similarly, the researchers think a theory of gravity might be best understood as the result of many interacting particles producing a collective effect tied to their temperature.

However, while there are theoretical clues that a thermodynamic foundation of gravity might exist, experiments haven’t provided researchers with any indication of what sort of quantum particles and interactions might be behind an emergent form of gravity. Without experimental evidence supporting any choice, researchers have been free to propose any type of quantum particle and any form of interaction to be the hypothetical cause of gravity. 

Taylor and Carney started with the goal of recreating the basic gravitational behaviors described by Newton instead of immediately attempting to encompass all of Einstein’s theory. A key feature described by Newton is the very particular way that gravity gets weaker as separation increases: Gravity always weakens in proportion to the square of the distance between two objects, a relationship called the inverse-square force law. The law means that as you move away from the Earth, or some other mass, its gravitational pull drops off quickly; doubling the distance cuts the pull to a quarter of its strength. But identifying quantum interactions with matter that could create even that general behavior wasn’t trivial, and that first step to imagining a new form of gravity eluded researchers.
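That inverse-square behavior is easy to state in code. The following minimal Python sketch (our illustration, not part of the paper; the masses and distances are arbitrary) evaluates Newton’s force law and shows that doubling the separation cuts the force to a quarter.

```python
# Minimal illustration of Newton's inverse-square law (not from the paper).
# F = G * m1 * m2 / r**2, so doubling r cuts the force to a quarter.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1_kg, m2_kg, r_m):
    """Magnitude of the Newtonian gravitational force between two masses."""
    return G * m1_kg * m2_kg / r_m**2

m1, m2 = 1000.0, 1000.0  # two arbitrary 1,000 kg masses
for r in [1.0, 2.0, 4.0]:
    print(f"r = {r} m -> F = {newton_force(m1, m2, r):.3e} N")
# Each doubling of the separation reduces the force by a factor of four.
```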

In the fall of last year, Carney and Manthos Karydas, a postdoctoral researcher working with Carney at LBNL who is also an author of the paper, worked out a simple model of quantum interactions that could capture the needed law. After Carney discussed the idea with Taylor, they were able to formulate a second distinct model with an alternative type of interaction.

“Dan came into my office and outlined the basic mechanism on the chalkboard,” Karydas says. “I found it very elegant, though his initial model gave a constant force between the masses. With some refinement, we managed to recover the inverse-square force law we had been aiming for.”

Both models assume there are many particles at a given temperature that can interact with all the masses included in the model. Unlike gravitons, these new particles can be understood as having a more permanent existence that is independent of gravity.

For convenience, they built both models with a sea of quantum particles that are all spins, which behave like tiny magnets that tend to align with magnetic fields. A vast variety of quantum objects can be described as spins, and they are ubiquitous in quantum research.

In one of the models, which the team called the local model, the quantum spins are spread evenly on a grid, and their interactions depend on their position relative to both the masses and each other. Whenever a massive object is placed somewhere on the grid, it interacts with the nearby spins, making them more likely to point in the same direction. And when it moves through the crowd, a cloud of quantum influence accompanies it.

The clouds of coordination around a mass can combine when two masses approach one another. The combination of their influence into the same space decreases the energy stored in the surrounding quantum particles, drawing the masses toward each other.
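To make that mechanism concrete, here is a loose numerical cartoon (our illustration, not the authors’ model; the grid, the range of each mass’s influence and the quadratic energy response are all assumptions) showing how two overlapping clouds of aligned spins can store less energy when the masses sit close together.

```python
# A loose numerical cartoon (our illustration, not the paper's model) of the
# "local" idea: spins sit on a grid, each mass tries to align the spins near
# it, and the stored energy drops when the two aligned clouds overlap.

import numpy as np

sites = np.linspace(-50, 50, 1001)   # 1D grid of spin positions
xi = 5.0                             # assumed range of each mass's influence

def influence(mass_position):
    """Assumed influence of a mass on each spin, decaying with distance."""
    return np.exp(-np.abs(sites - mass_position) / xi)

def energy(separation):
    """Toy stored energy for two equal masses a given distance apart.
    We assume each spin lowers its energy quadratically in the total local
    influence, so overlapping clouds lower the energy further."""
    h = influence(-separation / 2) + influence(+separation / 2)
    return -np.sum(h**2)

for d in [2.0, 5.0, 10.0, 20.0]:
    print(f"separation = {d:5.1f} -> toy energy = {energy(d):8.2f}")
# The energy is lower (more negative) at small separations, so this toy
# system favors pulling the masses together, as described above.
```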

In contrast, the original model that Carney and Karydas developed doesn’t paint a clear picture of how the spins are distributed and behave in space. They were inspired by the way waves behave when trapped between objects: When light is trapped between two mirrors or sound waves are trapped between two walls, only waves of specific lengths are stable for any particular spacing between the objects. You can define a clear set of all the waves that neatly fit into the given space.
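The standing-wave picture they drew on is the textbook one: for waves pinned between two boundaries separated by a distance d, only a discrete set of wavelengths fits (a standard relation, not a result of the paper).

```latex
% Standing waves between two boundaries a distance d apart (textbook relation):
\lambda_n = \frac{2d}{n}, \qquad k_n = \frac{n\pi}{d}, \qquad n = 1, 2, 3, \ldots
```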

While the particles in the model are spins and not waves, properties of their interactions resemble waves that must neatly fit between the two masses. Each spin interacts with every possible pair of masses in this wave-like way. The group dubbed this model the “non-local model” since the interactions don’t depend on where the quantum particles or masses are located individually but just on the distance between the masses. Since the positions of the spins don’t influence anything, the model doesn’t describe their arrangement in space at all. The group showed that the appropriate set of wave-like interactions can make the quantum particles store less energy when objects are close together, which will pull the objects towards each other.

“The nonlocal model seemed kind of bizarre when we first were writing it down,” Taylor says. “And yet, why should we guess which one is correct? We don't think either of them is correct in the fundamental sense; by including them both, we're being clear to the physics community that these are ways to get started without presupposing where to go.”

The particles being spins isn’t an essential feature of the models. The team demonstrated that other types of particles are worth considering by redoing their work on the non-local model for an alternative type of particle. They showed that the wave-like interactions could also produce gravity if the proposed particles were quantum harmonic oscillators, which can bounce or swing between states similar to springs and pendulums. 

The group’s calculations illustrate that both types of quantum interactions could produce a force with the signature behavior of Newton’s gravity, and the team described how the details of the interactions can be tailored so that the strength of the force matches what we see in reality. However, neither model begins to capture the intricacies of Einstein’s theory of gravity. 

“This is not a new theory of gravity,” Taylor says. “I want to be super clear about this. This is a way to reason about how thermodynamic models, including possibly those of gravity, could impact what you can observe in the lab.”

Despite the intentional oversimplification of both models, they still provide insights into what results researchers might see in future experiments. For instance, the interactions of the particles in both models can impact how much noise—random fluctuations—gravity imparts to objects as it pulls on them. In experiments, some noise is expected to come from errors introduced by the measurement equipment itself, but in these models, there is also an inescapable amount of noise produced by gravity.

The many interactions of quantum particles shouldn’t produce a steady pull of gravity but instead impart tiny shifts of momentum that produce the gravitational force on average. It is similar to the minuscule, generally imperceptible kicks of individual gas molecules collectively producing air pressure: At large scales, gravity in the models seems like a constant force, but on the small scale, it is actually the uneven pitter-patter of interactions tugging irregularly. So as researchers make more and more careful measurements of gravity, they can keep an eye out for a fluttering that they can’t attribute to their measurement technique and check if it fits with an emergent explanation of gravity.
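The averaging at work here can be illustrated with a quick simulation (our sketch with made-up numbers, not the paper’s model): many small, irregular kicks add up to what looks like a steady pull, with leftover fluctuations that only show up in careful measurements.

```python
# Quick illustration (made-up numbers, not the paper's model): many tiny,
# irregular momentum kicks whose average looks like a steady force, with
# residual fluctuations -- the kind of "noise" the emergent models predict.

import numpy as np

rng = np.random.default_rng(0)

n_steps = 100_000
mean_kick = 1e-6      # assumed average momentum transfer per interaction
spread = 5e-6         # assumed spread of the individual kicks

kicks = rng.normal(loc=mean_kick, scale=spread, size=n_steps)

print(f"average kick      : {kicks.mean():.3e}  (looks like a steady pull)")
print(f"fluctuation (std) : {kicks.std():.3e}  (the irregular pitter-patter)")
print(f"std of the average: {kicks.std() / np.sqrt(n_steps):.3e}  "
      "(shrinks as more kicks are averaged)")
```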

While the two models share some common features, they still produce slightly different predictions. For instance, the non-local model only predicts noise if at least two masses are present, but the local model predicts that even a solitary mass will constantly be buffeted by random fluctuations.

Moving forward, these models need to be compared to results from cutting-edge experiments measuring gravity and improved to capture additional phenomena, such as traveling distortions of space called gravitational waves, that are described by Einstein’s theory of gravity. 

“The clear next thing to do, which we are trying to do now, is make a model that has gravitational waves because we know those exist in nature,” Carney says. “So clearly, if this is going to really work as a model of nature, we have to start reproducing more and more things like that.”

Story by Bailey Bedford

In addition to Carney, Karydas and Taylor, co-authors of the paper include Thilo Scharnhorst, a graduate student at the University of California, Berkeley (UCB), and Roshni Singh, a graduate student at UCB and LBNL.

A Cosmic Photographer: Decades of Work to Get the Perfect Shot

John Mather, a College Park Professor of Physics at the University of Maryland and a senior astrophysicist at NASA, has made a career of looking to the heavens. He has led projects that have revealed invisible stories written across the sky and helped us understand our place in the universe.

He left his mark on physics by uncovering the earliest chapter of our universe’s story. He and his colleagues captured an image of the invisible remains of some of the universe’s first light. To get the image, they built and used NASA’s Cosmic Background Explorer (COBE) satellite, which Mather played a key role in making a reality in 1989. Researchers used the images of the primordial light, called the cosmic microwave background radiation, to confirm that the universe burst forth from a very hot and dense early state—a process commonly called the big bang. In 2006, Mather shared the Nobel Prize in physics for the work.

After COBE, Mather became a senior project scientist on NASA’s James Webb Space Telescope (JWST) in 1995. He worked for more than a quarter of a century to make the state-of-the-art telescope a reality before it finally launched in December of 2021.

But Mather wasn’t ready to end his career when the JWST became a reality. The launch of the JWST heralded a new chapter for him, in which he splits his time between sharing the JWST’s results with the world and developing new projects to uncover more of the universe’s mysteries.

JWST: A Long-Haul Effort

Launching the JWST was the start of its story as a tool for scientific discovery, but it was also the conclusion of a massive effort by Mather and many others. Mather had been part of the JWST team since the beginning. He worked on the original proposal in 1995 and proceeded to spend the next decades helping engineers design the telescope; coordinating with team members from Europe, Canada and across the US; and generally working to keep the project on track.

The years of effort produced an array of mirrors designed to unfold into a 21-foot-wide final configuration. The delicate mirrors and necessary equipment were placed on top of a rocket, and Mather and his colleagues put their faith into their years of preparation.

As the final seconds to the launch counted down, Mather watched the fate of the mission play out from his sofa at home. The JWST team had a busy schedule planned for months after the launch, and they didn’t want cases of COVID-19, or anything else, disrupting their carefully laid plans.

“Nobody was allowed to go anywhere, to take any chances with catching that bug,” Mather said. “Because we needed them to be alive and ready to work at any moment.”

The launch went off without a hitch, but that didn’t mean the team could breathe a sigh of relief. It was still possible the telescope could fail to produce any images. The telescope had to travel almost a million miles to its final orbit, successfully unfold itself and calibrate multiple components before researchers could tell if it was actually working.

Its predecessor, the Hubble Space Telescope, couldn’t take images in focus when it was first deployed because of a slightly misshapen mirror. A similar issue would be much more devastating for the JWST because its final destination was almost 3,000 times farther from Earth—about four times farther than the moon. So any repair visit would be impractical and unlikely to be attempted.

“The sort of moment of truth was the first image we got which showed focus,” Mather said. “About 40 people or so were assembled in the control rooms at the Space Telescope Science Institute. They all got to look at this wonderful image at the same time, and it was covered with galaxies. So we knew that not only had we done a great engineering job but there were things to study everywhere.”

JWST: Reaping the Benefits 

The JWST has so much to study because it can see much farther than its predecessors. When light travels far enough, the waves making it up get stretched out and become harder to see (the universe itself is expanding, which stretches out light along with it). As planned, detecting ancient light has revealed objects from earlier in the universe’s history than scientists have ever seen before (though still after the messy period that produced the microwave background radiation). With this new window into the past, scientists have confirmed theories, such as how galaxies take time to spin themselves into shape, as well as uncovered new mysteries, like spotting unexpectedly bright galaxies in the early universe.

Besides capturing stretched-out light, the JWST has another tool for observing the farthest reaches of space. Like a photographer pulling out a high-powered lens to capture a distant subject, the JWST has tools for zooming in on distant corners of the universe. NASA didn’t have to make them; the JWST takes advantage of natural lenses that are formed by the gravity of many galaxies that are clustered together. The collective gravity warps space and makes a gravitational lens that directs light along a curved path similar to how a glass lens bends light.

A gravitational lens took center stage in the first JWST image released to the public and revealed the glittery details of one of Mather’s favorite galaxies to talk about—the “Sparkler Galaxy.” The signature sparkles are dense clusters of stars that are important for understanding the initial formation of a galaxy.

The JWST isn’t only revealing the distant universe; it is also giving us better snapshots of our own neighborhood. The specialized cameras on the JWST have been used to detect light carrying the signatures of interactions with specific molecules. Researchers have used this to study other planets and moons in our solar system.

“I was ignorant about the solar system, and I am really surprised and pleased to see that we're able to map the presence of molecules on the satellites in our solar system,” Mather said. “We see that on Titan, which is a satellite of Saturn, we're able to make a map of where different molecules are, and that's interesting, because it's the only satellite in the solar system that has an atmosphere of its own to speak of.”

The data from inside and outside our solar system keep pouring in, and researchers continue to propose new ways the JWST can advance science. After the team was sure the project was running smoothly, Mather handed over his position as the JWST’s senior project scientist to Jane Rigby in 2023. But that doesn’t mean he hasn’t been keeping an eye on the mission.

“Following the conclusion of my work on the James Webb Space Telescope, I follow along the science that's being produced, and I give a lot of public talks about that,” Mather said. “I really enjoy doing that because people want to know what we found, and they are still thrilled with the brilliant engineering.”

Orbiting Starshades: Going the Distance to Get the Shot

While the JWST results continue to excite Mather, he wanted to return to his roots: problem-solving and developing projects to uncover new pieces of the heavens.

“I enjoy the creative part at the beginning, and after you get past that, then I'm a little nervous and impatient, and my job was basically running a lot of meetings for a long time, and that's not as much fun as thinking of something new to work on, for me,” Mather said. “It's definitely important to do, but it's just a different thing.”

The new project that has caught Mather’s interest is getting the perfect lighting to photograph planets in other solar systems—exoplanets. To do so, he wants to put a satellite, called a starshade, into orbit. A starshade would obstruct the light of a star before it reaches a telescope, but it needs to be outside the atmosphere to work. One could be paired with a telescope that is also in space, like the Hubble Space Telescope, but Mather thinks starshades have the greatest potential when partnered with the massive telescopes we build on the ground.

Obstructing the light from a star should allow the telescope to pick up the much dimmer light reflected by a planet orbiting it. It’s like watching a plane flying in the same part of the sky as the sun: To avoid being blinded, you raise your hand to block out the sun.

By blocking a star’s light, a telescope can not only spot nearby planets but also detect the signature of molecules, like oxygen and water, that the light interacted with when it passed through a planet’s atmosphere. Such measurements would dramatically upgrade our ability to discover and study many more planets throughout the universe.

Current methods of identifying exoplanets generally rely on observing a planet’s gravitational influence on a star or detecting it passing between its star and us (we notice a slight dimming of the star, rather than actually observing the planet). These approaches let us discover planets around stars that are much smaller than our sun or detect large planets—similar to the gas giants in our solar system—that are near their star. But the available techniques leave us effectively blind to the planets most like Earth.
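A rough back-of-the-envelope calculation (our illustration, using standard radii rather than figures from the article) shows why the dimming method favors large planets: the dip in starlight is set by the ratio of the planet’s and star’s cross-sectional areas.

```python
# Back-of-the-envelope transit depth (our illustration, not from the article):
# the fractional dimming is roughly (planet radius / star radius) squared.

R_SUN_KM = 696_000
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
    """Fraction of the star's light blocked when the planet crosses it."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM):.2%} dimming")
print(f"Earth-size planet  : {transit_depth(R_EARTH_KM):.4%} dimming")
# A Jupiter blocks about 1% of a Sun-like star's light; an Earth blocks
# roughly 0.008%, which is why small planets are so hard to spot this way.
```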

However, before they can hunt for Earth-like exoplanets, researchers must solve the unique challenges of getting a working starshade in orbit. A planet can be billions of times dimmer than the star, and because of the vast distances between us and other solar systems, planets and their sun are almost indistinguishable specks. To get the right lighting, scientists must place the starshade in front of the star without accidentally covering the planet right next to it.

They must also account for the fact that light sometimes deviates from a straight-line path. Light traveling from one medium to another, like from air to water or from thin air to dense air, shifts its direction (stars “twinkle” because of these distortions occurring as their light travels through Earth’s atmosphere). Light also changes its direction by bending around the edges of objects—including the edges of the starshade.

Combining all the known constraints gave Mather and his colleagues strict requirements for designing a starshade to work with a telescope on the ground.

“It needs to be a pointy sunflower, 100 meters in diameter, located at least 175,000 kilometers away from us in orbit around the Earth,” Mather said. “So that's huge. And the normal ways we would build something like that would make it also very heavy.”
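As a rough sanity check (our arithmetic, not from the article), those numbers correspond to a tiny apparent size on the sky, comparable to the angular separation between a star and an Earth-like planet around it as seen from tens of light-years away.

```python
# Rough angular-size check (our arithmetic, not from the article).
import math

diameter_m = 100.0
distance_m = 175_000_000.0           # 175,000 km expressed in meters

angle_rad = diameter_m / distance_m  # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"apparent size of the starshade: {angle_arcsec:.2f} arcseconds")
# Roughly 0.1 arcseconds -- about the angle the Earth-Sun separation would
# span seen from ~10 parsecs (~33 light-years), the scale of nearby stars.
```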

The petals of the massive flower shape that researchers have settled on ensure the stray light deflected around them doesn’t get sent toward the center of a telescope. But the potential bulk of the structure has a cost; heavy satellites are expensive to launch and difficult to maneuver into position. So now Mather and his colleagues are brainstorming ways to make the starshade as light as possible.

One of the approaches they are considering is making it inflatable: Cut a sheet into the right shape and make a balloon frame to support it. But the approach leaves them concerned about the whole thing popping. While space is mostly empty, there are small objects—micrometeorites—zipping around, and over time collisions happen. So Mather and his colleagues also need to make the starshade durable.

A key idea they are pursuing is sending up multiple layers of sheets so that when a micrometeorite slams through them, the different layers can still block out most of a star’s light. It’s only an issue if the star’s light happens to follow the exact same trajectory as one of the micrometeorites. However, the team still needs a way to reinforce the inflatable framework to survive collisions.

The team is considering building the frame using resins or other materials that could undergo a chemical transformation into a sturdy structure after being deployed into shape. Another idea they are playing with is to deflate the starshade when it is not in use so that it is a smaller target and will get hit less often.

While developing the starshade, Mather is also pursuing related projects, like putting a stable standard light source—an artificial star—in orbit to aid ground-based telescopes. Having a steady light at a known brightness in the sky can help astronomers study stars. Astronomers don’t always know the actual brightness of objects they see through telescopes, and analysis is complicated because the atmosphere distorts the light before it reaches the telescope. Having a steady light above the atmosphere gives astronomers a point of comparison for determining the true brightness of what they observe. More importantly, it can also help them reverse engineer the distortions of the atmosphere and piece together the original image.

This technique will support future experiments using orbiting starshades since any light from the planet that reaches the ground will be distorted and require correction. Mather is part of a project led by George Mason University researchers that plans to put an artificial star into orbit in 2029.

Mather is also throwing his support behind other projects that are further into their development, like the Black Hole Explorer, which aims to observe light that has orbited black holes. While Mather’s various projects generally look into the far reaches of space, he’s still invested in learning about our home. Both Mather’s past and upcoming work explore our origins as they open up the wider universe to us.

“We actually said we were going to try to discover our own history by looking at the history of other places,” Mather said. “So what's the history of our own galaxy? Well, you can't really tell, but you can look at the formation of galaxies. You can look back in time by looking at things that are far away. So we're getting a photo album of ourselves by looking at our cousins way out there and seeing what were they like when they were young.”

Written by Bailey Bedford

 

Researchers Spy Finish Line in Race for Majorana Qubits

Our computer age is built on a foundation of semiconductors. As researchers and engineers look toward a new generation of computers that harness quantum physics, they are exploring various foundations for the burgeoning technology.

Almost every computer on earth, from a pocket calculator to the biggest supercomputer, is built out of transistors made from a semiconductor (generally silicon). Transistors store and manipulate data as “bits” that can be in one of two possible states—the ones and zeros of binary code. Many other things, from vacuum tubes to rows of dominos, can theoretically serve as bits, but transistors have become the dominant platform since they are convenient, compact and reliable.

Researchers have been working to demonstrate that devices that combine semiconductors and superconductors, like this one made by Microsoft, have the potential to be the basis for a new type of qubit that can open the way to scalable quantum computers. (Credit: John Brecher, Microsoft)

Quantum computers promise unprecedented computational capabilities by replacing bits with quantum bits—qubits. Like normal bits, qubits have two states used to represent information. But they have additional power since they can simultaneously explore both possibilities during calculations through the phenomenon of quantum superpositions. The information stored in superpositions, along with other quantum effects like entanglement, enables new types of calculations that can solve certain problems much faster than normal computers.
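In the standard notation (a textbook expression, not specific to this story), a qubit in the middle of a calculation can be written as a weighted combination of its two values:

```latex
% A qubit in superposition (standard textbook notation):
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Measuring the qubit then yields 0 with probability |α|² and 1 with probability |β|².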

All quantum objects sometimes enter a superposition, but that doesn’t mean they all make practical qubits. A qubit needs certain traits to be useful. First, it must have states that are easy to identify and manipulate. Then, the superposition of those states must last long enough to perform calculations. Physicists typically think about the stability of a quantum state in terms of its coherence, and how long a quantum state can remain coherent is a critical metric for judging its suitability as a qubit.

So far, no platform has emerged as the default for making qubits the way silicon-based transistors did for bits. As researchers and companies build the early generations of quantum computers, many options are being used, including superconducting circuits, trapped ions, and even particles of light. 

Some researchers, including Professor and JQI Fellow Sankar Das Sarma and Professor and JQI Co-Director Jay Sau, have been exploring the possibility that semiconductors might prove to be a good foundation for quantum computing as well. In 2010, Das Sarma, Sau and JQI postdoctoral researcher Roman Lutchyn proposed that a strong magnetic field and a nanowire device made from the combination of a semiconductor with a superconductor could be used to create a particle-like quantum object—a quasiparticle—called a Majorana. Nanowires hosting Majorana quasiparticles should be able to serve as qubits and come with an intrinsic reliability enforced by the laws of physics. However, no one has definitively demonstrated even a single Majorana qubit while some quantum computers built using other platforms already contain more than 1,100 qubits.

Fifteen years after Das Sarma and Sau opened the door to experiments searching for Majorana quasiparticles, they are optimistic that the proof of Majorana-based qubits is now on the horizon. In an article published on June 18 in the journal Physical Review B, Das Sarma and Sau used theoretical simulations to analyze cutting-edge experiments hunting for Majoranas and proposed an experiment to finally demonstrate Majorana qubits.

“I think that we're at the point where we can at least see where we are in terms of qubits,” says Sau, who is also professor of physics at UMD. “It might still be a long road. It's not going to be easy, but at least we can see the goal now.”

The Tortoise and the Hare?

Since quantum computers are already being built from other types of qubits, Majoranas are behind in the race. But they have some advantages that proponents, including Das Sarma and Sau, hope will let them make up for lost time and become the preferred foundation for quantum computers. 

One advantage Majorana qubits potentially have is the extensive infrastructure that already exists around semiconductors. The established knowledge and fabrication techniques from the computer industry might allow large quantum computers with many interacting qubits to be built using Majoranas (a goal all the competing platforms are also working toward). If Majorana-based quantum computers prove easier to scale up than their competition, that advantage might allow them to make up for lost time and become a competitive technology.

But the thing that really sets Majorana qubits apart from the competition is that their properties should protect them from errors that would derail calculations. In general, the superpositions of quantum states are easily disrupted by outside influence, which makes them useless for quantum computing. However, Majorana quasiparticles are a type of topological state, which means they have traits that the rules of physics say can only be altered by a dramatic change to the whole system and are impervious to minor influences. This “topological protection” might be harnessed to make Majorana qubits more stable than their competition.

Pursuing these potential advantages, Microsoft has been trying to develop a Majorana-based quantum computer, and in February of this year, Microsoft researchers shared new results in the journal Nature where they described their observations of the two distinct quantum states of their device designed to create Majorana quasiparticles. At the same time, they also announced additional claims of having Majorana-based qubits, which has sparked controversy. In July, Microsoft researchers posted an article on the arXiv preprint server that elaborates on their claims of creating a Majorana qubit using their device. They observe results consistent with creating a short-lived Majorana qubit, but they acknowledge in the article that the type of experiment they performed cannot prove that Majoranas are the explanation.

The experiment described in the Nature article opened new measurement opportunities by introducing an additional component, called a quantum dot, that connects to the nanowire. The quantum dot serves as a bridge that quantum particles can travel across but never stop on—a process called quantum tunneling. This development allowed them, for the first time, to hunt for Majorana quasiparticles that might be useful in making qubits. Their experiment demonstrated that they could observe two distinct states where they expect Majoranas to reside in the device, which hints at the necessary ingredients of a qubit, but didn’t prove that Majorana quasiparticles were successfully created. 

In their new article, Das Sarma and Sau analyzed Microsoft’s published data and found that the measurements described in the paper could not prove if they had an ideal Majorana quasiparticle with its guaranteed stability or if the presence of impurities in the device resulted in imperfect Majoranas that produce only some of the signs of a topological state. The imperfect Majoranas could still offer some improved stability but don’t carry the ironclad guarantee of topological protection. Even if the device didn’t hold ideal Majoranas, it still might be able to function as a qubit with useful properties. 

“We were excited that they're now actually getting into a qubit regime,” Sau says. “One of the things we wanted to do in our analysis is be agnostic to whether it's Majoranas or not. You have this device. Let's just ask ‘What kind of a qubit coherence would one get from a qubit made out of this device?’ And that's really the important question in this field.”

The Road Forward

To investigate the coherence of potential Majorana qubits, Das Sarma and Sau created a basic theoretical model of Microsoft’s device and used it to simulate nanowires that harbored various amounts of disorder—disorder that might disrupt the formation of Majoranas. Their simulations indicate that Microsoft’s device is likely plagued by a level of disorder that would prevent it from having a competitively long coherence time. However, the simulations also indicate that if enough disorder is eliminated then the device could see a dramatic jump in how long it maintains its coherence. A clear measurement of coherence is likely to be the smoking gun that leads many researchers to accept that a Majorana qubit has been fabricated. 

As researchers attempt to demonstrate the coherence of a Majorana qubit, there are two ways they can respond to the presence of disorder in their devices: eliminate it or work around it. 

In the article, Das Sarma and Sau used their simulation to describe experiments that require devices with fewer defects and that could clearly demonstrate the coherence of the Majorana quasiparticle. The experiment would require researchers to use two nanowires that can each contain a Majorana and that are connected via quantum tunneling. The tunneling would allow researchers to create a superposition where the Majorana is in a mixture of the two possible locations. When there is a superposition between the two states, the researchers can make the probability of the Majorana being in each spot oscillate so it goes from more to less likely in each location. This back and forth is predicted to provide a clear sign of how long the quantum state remains coherent, but achieving it would require improving the nanowire quality.
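A generic two-site tunneling sketch (our illustration, not the authors’ simulation; the tunneling strength and decoherence rate below are arbitrary) shows the kind of oscillation they propose to look for and how decoherence would damp it.

```python
# Generic two-site tunneling sketch (our illustration, not the authors' full
# simulation): a particle shared between two wires oscillates back and forth,
# and damping of that oscillation would reveal the loss of coherence.

import numpy as np

hbar = 1.0          # work in natural units
delta = 0.5         # assumed tunneling strength between the two wires
gamma = 0.05        # assumed decoherence rate, just to show the damping

for t in np.linspace(0.0, 20.0, 9):
    p_left_ideal = np.cos(delta * t / hbar) ** 2            # perfect coherence
    p_left_noisy = 0.5 + (p_left_ideal - 0.5) * np.exp(-gamma * t)
    print(f"t = {t:5.1f}   ideal P(left) = {p_left_ideal:.2f}   "
          f"with decoherence = {p_left_noisy:.2f}")
# In an experiment, how quickly the oscillation damps toward 50/50 sets the
# coherence time of the candidate qubit.
```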

The pair also discussed an alternative approach using simpler experiments that are similar to the one published in Nature. This approach continues to focus on measuring if a particle exists in a particular wire. The simulations Das Sarma and Sau performed suggest that if the measurement techniques were improved sufficiently, this approach could give an estimate of the coherence, but it would be trickier to pick out the signature from the noise than with their proposed experiment. And the effort spent improving the measurement would likely not translate into making the nanowire into a Majorana qubit with a competitive coherence time.

“We can now start to see the finish line,” Sau says. “The experiments don't tell us quite where we are; that's where simulations are useful. It’s telling us that disorder is one of the key bottlenecks for this. There are various options of which path you can take, and the harder and more rewarding path is to just work on improving disorder.” 

Original story by Bailey Bedford: jqi.umd.edu/news/researchers-spy-finish-line-race-majorana-qubits

 

Superconductivity’s Halo: Physicists Map Rare High-field Phase

A puzzling form of superconductivity that arises only under strong magnetic fields has been mapped and explained by a research team from UMD, NIST and Rice University, including Andriy Nevidomskyy, a professor of physics and astronomy at Rice University. Their findings, published in Science on July 31, detail how uranium ditelluride (UTe2) develops a superconducting halo under strong magnetic fields.

Traditionally, scientists have regarded magnetic fields as detrimental to superconductors. Even moderate magnetic fields typically weaken superconductivity, while stronger ones can destroy it beyond a known critical threshold. However, UTe2 challenged these expectations when, in 2019, it was discovered to maintain superconductivity in critical fields hundreds of times stronger than those found in conventional materials.

Image by Sylvia Klare Lewin, Nicholas P. Butch/NIST & UMD

“When I first saw the experimental data, I was stunned,” said Andriy Nevidomskyy, a member of the Rice Advanced Materials Institute and the Rice Center for Quantum Materials. “The superconductivity was first suppressed by the magnetic field as expected but then reemerged in higher fields and only for what appeared to be a narrow field direction. There was no immediate explanation for this puzzling behavior." 

Superconducting resurrection in high fields

This phenomenon, initially identified by researchers at the University of Maryland Quantum Materials Center and the National Institute of Standards and Technology (NIST), has captivated physicists worldwide. In UTe2, superconductivity vanished below 10 Tesla, a field strength that is already immense by conventional standards, but surprisingly reemerged at field strengths exceeding 40 Tesla. 

This unexpected revival has been dubbed the Lazarus phase. Researchers determined that this phase critically depends on the angle of the applied magnetic field in relation to the crystal structure. 

In collaboration with experimental colleagues at UMD and NIST, Nevidomskyy decided to map out the angular dependence of this high-field superconducting state. Their precise measurements revealed that the phase formed a toroidal, or doughnutlike, halo surrounding a specific crystalline axis. 

“Our measurements revealed a three-dimensional superconducting halo that wraps around the hard b-axis of the crystal,” said Sylvia Lewin of NIST, a co-lead author on the study. “This was a surprising and beautiful result.”

Building theory to fit halo

To explain these findings, Nevidomskyy developed a theoretical model that accounted for the data without relying heavily on debated microscopic mechanisms. His approach employed an effective phenomenological framework with minimal assumptions about the underlying pairing forces that bind electrons into Cooper pairs. 

The model successfully reproduced the nonmonotonic angular dependence observed in experiments, offering insights into how the orientation of the magnetic field influences superconductivity in UTe2. 

Deeper understanding of interplay

The research team found that the theory, fitted with a few key parameters, aligned remarkably well with the experimental features, particularly the halo’s angular profile. A key insight from the model is that Cooper pairs carry intrinsic angular momentum like a spinning top does in classical physics. The magnetic field interacts with this momentum, creating a directional dependence that matches the observed halo pattern. 

This work lays the foundation for a deeper understanding of the interplay between magnetism and superconductivity in materials with strong crystal anisotropy like UTe2. 

“One of the experimental observations is the sudden increase in the sample magnetization, what we call a metamagnetic transition,” said NIST’s Peter Czajka, co-lead author on the study. “The high-field superconductivity only appears once the field magnitude has reached this value, itself highly angle-dependent.” 

The exact origin of this metamagnetic transition and its effect on superconductivity are hotly debated by scientists, and Nevidomskyy said he hopes this theory will help elucidate them.

“While the nature of the pairing glue in this material remains to be understood, knowing that the Cooper pairs carry a magnetic moment is a key outcome of this study and should help guide future investigations,” he said.

Co-authors of this study include Corey Frank and Nicholas Butch from NIST; Hyeok Yoon, Yun Suk Eo, Johnpierre Paglione and Gicela Saucedo Salas from UMD; and G. Timothy Noe and John Singleton from the Los Alamos National Laboratory. This research was supported by the U.S. Department of Energy and the National Science Foundation.

 Original article: https://news.rice.edu/news/2025/superconductivitys-halo-rice-theoretical-physicist-helps-map-rare-high-field-phase

New Protocol Demonstrates and Verifies Quantum Speedups in a Jiffy

While breakthrough results over the past few years have garnered headlines proclaiming the dawn of quantum supremacy, they have also masked a nagging problem that researchers have been staring at for decades: Demonstrating the advantages of a quantum computer is only half the battle; verifying that it has produced the right answer is just as important.

Now, researchers at JQI and the University of Maryland (UMD) have discovered a new way to quickly check the work of a quantum computer. They proposed a novel method to both demonstrate a quantum device’s problem-solving power and verify that it didn’t make a mistake. They described their protocol in an article published March 5, 2025, in the journal PRX Quantum.

“Perhaps the main reason most of us are so excited about studying large interacting quantum systems in general and quantum computers in particular is that these systems cannot be simulated classically,” says JQI Fellow Alexey Gorshkov, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS), a senior investigator at the National Science Foundation Quantum Leap Challenge Institute for Robust Quantum Simulation (RQS) and a physicist at the National Institute of Standards and Technology. “Coming up with ways to check that these systems are behaving correctly without being able to simulate them is a fun and challenging problem.”

Researchers have proposed a new way to both demonstrate and verify that quantum devices offer real speedups over ordinary computers. Their protocol might be suitable for near-term devices made from trapped ions or superconducting circuits, like the one shown above. (Credit: Kollár Lab/JQI)

In December 2024, Google announced its newest quantum chip, called Willow, accompanied by a claim that it had performed a calculation in five minutes that would have taken the fastest supercomputers 10 septillion years. That disparity suggested a strong demonstration of a quantum advantage and hinted at blazing fast proof that quantum computers offer exponential speedups over devices lacking that quantum je ne sais quoi.

But the problem that the Willow chip solved—a benchmark called random circuit sampling that involves running a random quantum computation and generating many samples of the output—is known to be hard to verify without a quantum computer. (Hardness in this context means that it would take a long time to compute the verification.) The Google team verified the solutions produced by their chip for small problems (problems with just a handful of qubits) using an ordinary computer, but they couldn’t come close to verifying the results of the 106-qubit problem that generated the headlines.

Fortunately, researchers have also discovered easy-to-verify problems that can nevertheless demonstrate quantum speedups. Such problems are hard for a classical (i.e., non-quantum) computer but easy for a quantum computer, which makes them prime candidates for showing off quantum prowess. Crucially, these problems also allow a classical computer to quickly check the work of the quantum device.

Even so, not every problem with these features is practical for the quantum computers that exist right now or that will exist in the near future. In their new paper, the authors combined two key earlier results to construct a novel protocol that is more suitable for demonstrating and verifying the power of soon-to-be-built quantum devices.

One of the earlier results identified a suitable problem with the right balance of being difficult to solve but easy to verify. Solving that problem amounts to preparing the lowest energy state of a simple quantum system, measuring it, and reporting the outcomes. The second earlier result described a generic method for verifying a quantum computation after it has been performed—a departure from standard methods that require a live back-and-forth while the computation is running. Together, the two results combined to significantly cut down the number of repetitions needed for verification, from an amount that grows as the square of the number of qubits down to a constant amount that doesn’t grow at all.

“We combined them together and, somewhat unexpectedly, this also reduced the sample complexity to a really low level,” says Zhenning Liu, the lead author of the new paper and a graduate student at QuICS.

The resulting protocol can run on any sufficiently powerful quantum computer, but its most natural implementation is on a particular kind of device called an analog quantum simulator.

Generally, quantum computers, which process information held by qubits, fall into two categories. There are digital quantum computers, like Google’s Willow chip, that run sequences of quantum instructions and manipulate qubits with discrete operations, similar to what ordinary digital computers do to bits. And then there are analog quantum computers that initialize qubits and let them evolve continuously. An analog quantum simulator is a special-purpose analog quantum computer. 

Liu and his colleagues—inspired by the kinds of quantum devices that are already available and driven by one of the primary research goals of RQS—focused on demonstrating and verifying quantum advantage on a subset of analog quantum simulators.

In particular, their protocol is tailored to analog quantum simulators capable of hosting simple nearest-neighbor interactions between qubits and making quantum measurements of individual qubits. These capabilities are standard fare for many kinds of experimental qubits built out of trapped ions or superconductors, but the researchers required one more ingredient that might be harder to engineer: an interaction between one special qubit—called the clock qubit—and all of the other qubits in the device.

“Quantum simulators will only be useful if we can be confident about their results,” says QuICS Fellow Andrew Childs, who is also the director of RQS and a professor of computer science at UMD. “We wanted to understand how to do this with the kind of simulators that can be built today. It's a hard problem that has been a lot of fun to work on.”

Assuming an analog quantum simulator with all these capabilities could be built, the researchers described a protocol to efficiently verify its operation by following a classic two-party tale in computer science. One party, the prover, wants to convince the world that their quantum device is the real deal. A second party, the verifier, is a diehard skeptic without a quantum computer who wants to challenge the prover and ascertain whether they are telling the truth.

In the future, a practical example of this kind of interaction might be a customer accessing a quantum computer in a data center that can only be reached via the cloud. In that setting, customers might want a way to check that they are really using a quantum device and aren’t being scammed. Alternatively, the authors say the protocol could be useful to scientists who want to verify that they’ve really built a quantum simulator in their lab. In that case, the device would be under the control of a researcher doing double duty as both verifier and prover, and they could ultimately prove to themselves and their colleagues that they’ve got a working quantum computer.

In either case, the protocol goes something like this. First, the verifier describes a specific instance of the problem and an initial state. Then, they ask the prover to use that description to prepare a fixed number of final states. The correct final state is unknown to the verifier, but it is closely related to the original problem of finding the lowest energy state of a simple quantum system. The verifier also chooses how certain they want to be about whether the prover has a truly quantum device, and they can guarantee a desired level of certainty by adjusting the number of final states that they ask the prover to prepare.

For each requested state, the verifier flips a coin. If it comes up heads, the verifier’s goal is to collect a valid solution to the problem, and they ask the prover to measure all the qubits and report the results. Based on the measurement of the special clock qubit, the verifier either throws the results away or stores them for later. Measuring the clock qubit essentially lets the verifier weed out invalid results. The results that get stored are potentially valid solutions, which the verifier will publish at the end of the protocol if the prover passes the rest of the verification.

If the coin comes up tails, the verifier’s goal is to test that the prover is running the simulation correctly. To do this, the verifier flips a second coin. If that coin comes up heads, the verifier asks the prover to make measurements that check whether the input state is correct. If the coin comes up tails, the verifier asks the prover to make measurements that reveal whether the prover performed the correct continuous evolution. 

The verifier then uses all the results stemming from that second coin flip to compute two numbers. In the paper, the team calculated thresholds for each number that separate fraudulent provers from those with real quantum-powered devices. If the two numbers clear those thresholds, the verifier can publish the stored answers, confident in the fact that the prover is telling the truth about their quantum machine.
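Putting those steps together, the verifier’s side of the protocol has roughly the following shape (our schematic paraphrase of the description above; the prover object, helper functions and thresholds are placeholders, not the paper’s notation).

```python
# Schematic of the verifier's loop as described above (our paraphrase; the
# prover object, helper methods and thresholds are placeholders).

import random

def average(values):
    return sum(values) / len(values) if values else 0.0

def run_verifier(prover, problem, initial_state, num_rounds, thresholds):
    stored_solutions = []              # candidate answers kept for publication
    input_checks, evolution_checks = [], []

    for _ in range(num_rounds):
        final_state = prover.prepare(problem, initial_state)

        if random.random() < 0.5:                      # first coin: heads
            outcome = prover.measure_all_qubits(final_state)
            if outcome.clock_qubit_is_valid():         # weed out invalid runs
                stored_solutions.append(outcome)
        else:                                          # first coin: tails
            if random.random() < 0.5:                  # second coin: heads
                input_checks.append(prover.measure_input_check(final_state))
            else:                                      # second coin: tails
                evolution_checks.append(
                    prover.measure_evolution_check(final_state))

    # Two scores computed from the test rounds must clear preset thresholds.
    if (average(input_checks) >= thresholds.input_score
            and average(evolution_checks) >= thresholds.evolution_score):
        return stored_solutions    # prover passes; publish the stored answers
    return None                    # evidence of a faulty or fraudulent prover
```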

There is a caveat to the protocol that limits its future use by a suspicious customer of a quantum computing cloud provider. The protocol assumes that the prover is honest about which measurements they make—it assumes that they aren’t trying to pull one over on the verifier and that they make the measurements that the verifier requests. The authors describe a second version of the protocol that parallels the first and relaxes this element of trust. In that version, the prover doesn't measure the final states but instead transmits them directly to the verifier as quantum states—a potentially challenging technical feat. With the states under their control, the verifier can flip the coins and make the measurements all on their own. This is why the protocol can still be useful for researchers trying to put their own device through its paces and demonstrate near-term quantum speedups in their labs.

Ultimately the team would love to relax the requirement that the prover is trusted to make the right measurements. But progress toward this more desirable feature has been tough to find, especially in the realm of quantum simulation.

“That's a really hard problem,” Liu says. “This is very, very nontrivial work, and, as far as I know, all work that has this feature relies on some serious cryptography. This is clearly not easy to do in quantum simulations.”

Original story by Chris Cesare: https://jqi.umd.edu/news/new-protocol-demonstrates-and-verifies-quantum-speedups-jiffy

In addition to Gorshkov, Zhenning Liu, and Childs, the paper had several other authors: Dhruv Devulapalli, a graduate student in physics at UMD; Dominik Hangleiter, a former QuICS Hartree Postdoctoral Fellow who is now a Quantum Postdoctoral Fellow at the Simons Institute for the Theory of Computing at the University of California, Berkeley; Yi-Kai Liu, who is a QuICS Fellow and a senior investigator at RQS; and JQI Fellow Alicia Kollár, who is also a senior investigator at RQS.