Beyond Higgs: The Search for New Particles That Could Solve Mysteries of the Universe

An elusive elementary particle called the Higgs boson is partly to thank for life as we know it. No other elementary particles in the early universe had mass until they interacted with a field associated with the Higgs boson, enabling the emergence of planets, stars and—billions of years later—us.

Despite its cosmic importance, scientists couldn’t prove the Higgs boson even existed until 2012, when they smashed protons together at the most powerful particle accelerator ever built: the Large Hadron Collider (LHC). A decade later, this massive machine, built in a tunnel beneath the France-Switzerland border, is up and running again after a series of upgrades. 

As the search for new particles starts anew, researchers like Assistant Professor of Physics Manuel Franco Sevilla find themselves wondering if they will discover anything beyond Higgs.

UMD Assistant Professor of Physics Manuel Franco Sevilla is helping to upgrade the Large Hadron Collider (LHC) at CERN in Switzerland. In the above photo, he is shining a light on silicon sensors to measure whether their dark current increases—a sign that they are properly connected. Photo courtesy of Manuel Franco Sevilla.

“It’s a tough question because particle physics is at a juncture,” said Franco Sevilla, who is working at the LHC this semester. “It’s possible that we are currently in what physicists call a ‘nightmare scenario,’ where we discovered the Higgs, but after that, there’s a big desert. According to this theory, we will not be able to find anything new with our current technology.”

This is the worst-case scenario, but not the only possible outcome. Some scientists say the LHC could make another discovery on par with the Higgs boson, potentially identifying new particles that explain the origin of dark matter or the mysterious lack of antimatter throughout the universe. Others say new colliders—more powerful and precise than their predecessors—must be built to bring the field into a new era.

There are no easy answers, but Franco Sevilla and several other faculty members in UMD’s Department of Physics are rising to the challenge. Some are working directly with the LHC and future collider proposals, while others are developing theories that could solve life’s biggest mysteries. All of them, in their own way, are advancing the ever-changing field of particle physics.

Looking For Beauty

As Franco Sevilla will tell you, there’s beauty all around—if you know where to look. He is one of the researchers who uses the LHC’s high-powered collisions to study a particle called the beauty quark—b quark for short. It is part of “flavor physics,” which studies the interactions among the six “flavors,” or varieties, of the elementary particles known as quarks and leptons.

Because these particles sometimes behave in unexpected ways, Franco Sevilla believes that flavor physics might lead to breakthroughs that justify future studies in this field—and possibly even the need for a new particle collider. 

“Some of the most promising things that will break the ‘nightmare scenario’ and allow us to find something are coming from flavor physics,” he said. “We still haven't fully discovered something new, but we have a number of hints, including the famous ‘b anomalies.’”

These anomalies are instances where the b quark decayed differently than predicted by the Standard Model, the prevailing theory of particle physics. This suggests that something might exist beyond this model—which, ever since the 1970s, has helped explain the fundamental particles and forces that shape our world.

Some mysteries remain, though. Physicists still don’t know the origin of dark matter—an enigmatic substance that exerts a gravitational force on visible matter—or why there’s so much matter and so little antimatter in the universe.

Sarah Eno, a UMD physics professor who has conducted research at particle accelerators around the world, including the LHC, said these answers will only come from collider experiments.

Sarah Eno, a physics professor at UMD, sits atop a model of a Large Hadron Collider (LHC) dipole magnet at CERN about 10 years ago. At the time, she was participating in LHC experiments and frequently spent her summers at the lab in Switzerland. Credit: Meenakshi Narain.

“What is the nature of dark matter? Nobody has any idea,” Eno said. “We know it interacts via gravity, but we don’t know whether it has any other kinds of interactions. And only an accelerator can tell us that.”

Better, Faster, Stronger

After the third (and current) run of the LHC ends, the collider’s accelerator will be upgraded in 2029 to “crank up the performance,” according to the European Organization for Nuclear Research (CERN), which houses the collider.

With the added benefit of stronger magnets and higher-intensity beams, this final round of experiments could be a game-changer. But once that ends, Eno said the LHC will have reached its limit in terms of energy output, lowering the odds of any new discoveries. The logical next step, she said, would be to construct a new collider capable of propelling the field of particle physics forward.

“The field is now trying to decide what the next machine is,” Eno said.

Around the globe, there are various proposals on the table. There is a significant push for another proton collider in the same vein as the LHC, except bigger and more powerful. Electron-positron colliders—both linear and circular in shape—have also been proposed in China, Japan and Switzerland.

Eno is one of the physicists leading the charge for the construction of an electron-positron collider at CERN. It would be the first stage of the proposed Future Circular Collider (FCC), which would be four times longer than the LHC. By the late 2050s, it would be upgraded to a proton collider, with an energy capacity roughly seven times that of the LHC. Eno, who was appointed one of the U.S. representatives for this project, said the electron-positron collider would allow physicists to study the Higgs boson with significantly higher precision.

“When an electron and positron annihilate, all their energy becomes new states of matter,” Eno said. “This means that when you’re trying to reconstruct the final state, you know the total energy of that final state. This allows you to do much more precise measurements.”
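
One way to see the payoff of that bookkeeping is the “recoil mass” technique often discussed for electron-positron Higgs factories: because the total collision energy is known, measuring only the Z boson produced alongside a Higgs pins down the Higgs mass without reconstructing the Higgs at all. The numbers in this rough sketch (a 240 GeV collision energy and textbook Z and Higgs masses) are illustrative assumptions, not FCC design figures.

```python
import math

# Illustrative "recoil mass" bookkeeping at an electron-positron Higgs factory.
# The collision energy and particle masses below are standard reference values
# chosen for the example, not parameters of the FCC proposal.
sqrt_s = 240.0   # total collision energy in GeV, fully known in e+e- annihilation
m_Z = 91.19      # Z boson mass in GeV
m_H = 125.25     # Higgs boson mass in GeV (used only to build a consistent example)

# In an e+e- -> ZH event, two-body kinematics fixes the Z boson's energy:
E_Z = (sqrt_s**2 + m_Z**2 - m_H**2) / (2 * sqrt_s)

# Because the total energy is known, measuring only the Z determines the mass
# of whatever recoiled against it, without reconstructing the Higgs decay.
m_recoil = math.sqrt(sqrt_s**2 + m_Z**2 - 2 * sqrt_s * E_Z)
print(f"E_Z ~ {E_Z:.1f} GeV, recoil mass ~ {m_recoil:.2f} GeV")  # recoil mass ~ 125 GeV
```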

This proposal does come with some challenges. Electrons whipping around a circular collider radiate away a lot of energy, making it difficult to accelerate them to high energies. Eno said this can be mitigated by building a massive collider with a long tunnel (to the tune of 62 miles, in the case of the FCC), so the electrons bend gently instead of losing steam as they whip around sharp corners.

The radiation problem has pushed some physicists towards another theoretical possibility: a muon collider. Muons are subatomic particles that are like electrons, but 207 times heavier, which keeps them from radiating as much. This would make them ideal candidates for collider research—if only they didn’t decay in 2.2 microseconds.
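
Both the appeal of a longer tunnel and the appeal of heavier muons follow from the same textbook scaling: the energy a particle radiates per turn in a ring grows with the fourth power of its energy, shrinks with the fourth power of its mass, and shrinks as the ring gets bigger. A minimal sketch of that scaling, using relative numbers rather than any real machine parameters:

```python
# Relative synchrotron-radiation loss per turn in a circular collider.
# Textbook scaling: loss ~ E^4 / (m^4 * R); only ratios are meaningful here.
def loss_per_turn(energy, mass, radius):
    return energy**4 / (mass**4 * radius)

electron_mass, muon_mass = 1.0, 207.0   # a muon is ~207 times heavier than an electron

# Same beam energy, same ring: a muon radiates roughly two billion times less.
print(loss_per_turn(1.0, electron_mass, 1.0) / loss_per_turn(1.0, muon_mass, 1.0))  # ~1.8e9

# Same particle: quadrupling the ring's radius cuts the loss per turn by ~4,
# which is why proposals like the FCC call for a much longer tunnel.
print(loss_per_turn(1.0, electron_mass, 1.0) / loss_per_turn(1.0, electron_mass, 4.0))  # 4.0
```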

“If you talk to the muon collider proponents, their faces light up because it’s such a challenge,” Eno said. “And who doesn’t like a challenge?”

Clean Collisions 

One of Eno’s colleagues at UMD—Distinguished University Professor and theoretical physicist Raman Sundrum—endorses the muon collider idea. So much so that he and a team of physicists wrote a paper titled “The muon smasher’s guide,” which appeared in Reports on Progress in Physics in July 2022.

“We build colliders not to confirm what we already know, but to explore what we do not,” the research team wrote in their paper. “In the wake of the Higgs boson’s discovery, the question is not whether to build another collider, but which collider to build.”

They made the case for the world’s first muon smasher, arguing that these collisions would be “far cleaner” than the proton collisions that occur at the LHC. Unlike protons—composite objects made of quarks and gluons—muons are elementary particles with no smaller components. This would let physicists see only what they want to see, without any distractions.

“Muon collisions would make it easier to diagnose what’s going on,” Sundrum said. “When something extraordinary happens, it doesn’t get dwarfed by all of the mundane crashing of many parts.”

Considering that the Higgs boson only appears once in about a billion collisions at the LHC, this level of clarity and precision could make a world of difference. However, the more the field of particle physics advances, the more challenging it is to find something new.

“The Higgs was a needle in the haystack, but discovering newer particles could be even subtler and harder,” Sundrum said.

Artistic rendering of the Higgs field. Credit: CERN

Despite these challenges, the LHC could still make a major discovery. Sundrum continues to develop theories that guide and inspire the field, including the idea that the LHC could find a “parent particle” that gave rise to all protons in the universe. If this comes to fruition, it would be worth building new colliders that could validate the LHC’s initial findings and provide a more complete picture of why matter dominates over antimatter in the universe, Sundrum said.

In the coming years and decades, physicists will continue to debate the pros and cons of various collider proposals. The outcome will depend partly on scientific advancements, and partly on political will and funding. Sundrum said it’s not cheap to build a collider—with some projects expected to cost $10 billion—but the discoveries that could come from these experiments are priceless.

“An enormous number of people find it very moving and interesting to know what it’s all about, in terms of where the universe came from, what it means and how the laws work,” Sundrum said. “Individually these experiments are expensive, but as a planet, I think we can easily afford to do it.”

Written by Emily Nunez

Compact Electron Accelerator Reaches New Speeds with Nothing But Light

Scientists harnessing precise control of ultrafast lasers have accelerated electrons over a 20-centimeter stretch to speeds usually reserved for particle accelerators the size of 10 football fields.

A team at the University of Maryland (UMD) headed by Professor of Physics and Electrical and Computer Engineering Howard Milchberg, in collaboration with the team of Jorge J. Rocca at Colorado State University (CSU), achieved this feat using two laser pulses sent through a jet of hydrogen gas. The first pulse tore apart the hydrogen, punching a hole through it and creating a channel of plasma. That channel guided a second, higher-power pulse that scooped up electrons out of the plasma and dragged them along in its wake, accelerating them to nearly the speed of light in the process. With this technique, the team accelerated electrons to almost 40% of the energy achieved at massive facilities like the kilometer-long Linac Coherent Light Source (LCLS), the accelerator at SLAC National Accelerator Laboratory. The paper was published in the journal Physical Review X on September 16, 2022.

“This is the first multi-GeV electron accelerator powered entirely by lasers,” says Milchberg, who is also affiliated with the Institute for Research in Electronics and Applied Physics at UMD. “And with lasers becoming cheaper and more efficient, we expect that our technique will become the way to go for researchers in this field.”

An image from a simulation in which a laser pulse (red) drives a plasma wave, accelerating electrons in its wake. The bright yellow spot is the area with the highest concentration of electrons. In an experiment, scientists used this technique to accelerate electrons to nearly the speed of light over a span of just 20 centimeters. (Credit: Bo Miao/IREAP)

Motivating the new work are accelerators like LCLS, a kilometer-long runway that accelerates electrons to 13.6 billion electron volts (GeV)—the energy of an electron that’s moving at 99.99999993% the speed of light. LCLS’s predecessor is behind three Nobel-prize-winning discoveries about fundamental particles. Now, a third of the original accelerator has been converted to the LCLS, using its super-fast electrons to generate the most powerful X-ray laser beams in the world. Scientists use these X-rays to peer inside atoms and molecules in action, creating videos of chemical reactions. These videos are vital tools for drug discovery, optimized energy storage, innovation in electronics, and much more.  
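
That string of nines is special relativity at work. A quick check, assuming nothing beyond the electron’s rest energy of about 0.511 MeV and the 13.6 GeV beam energy quoted above:

```python
import math

# How close to the speed of light is a 13.6 GeV electron?
E_beam = 13.6e9    # beam energy in electron volts
E_rest = 0.511e6   # electron rest energy in electron volts

gamma = E_beam / E_rest                  # Lorentz factor, roughly 27,000
beta = math.sqrt(1.0 - 1.0 / gamma**2)   # speed as a fraction of the speed of light

print(f"gamma = {gamma:.0f}")
print(f"v/c   = {beta:.10f}")   # ~0.9999999993, i.e. 99.99999993% of c
```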

Accelerating electrons to energies of tens of GeV is no easy feat. SLAC’s linear accelerator gives electrons the push they need using powerful electric fields propagating in a very long series of segmented metal tubes. If the electric fields were any more powerful, they would set off a lightning storm inside the tubes and seriously damage them. Being unable to push electrons harder, researchers have opted to simply push them for longer, providing more runway for the particles to accelerate. Hence the kilometer-long slice across northern California. To bring this technology to a more manageable scale, the UMD and CSU teams worked to boost electrons to nearly the speed of light using—fittingly enough—light itself.

“The goal ultimately is to shrink GeV-scale electron accelerators to a modest size room,” says Jaron Shrock, a graduate student in physics at UMD and co-first author on the work. “You’re taking kilometer-scale devices, and you have another factor of 1000 stronger accelerating field. So, you’re taking kilometer-scale to meter scale, that’s the goal of this technology.”

Creating those stronger accelerating fields in a lab employs a process called laser wakefield acceleration, in which a pulse of tightly focused and intense laser light is sent through a plasma, creating a disturbance and pulling electrons along in its wake. 

“You can imagine the laser pulse like a boat,” says Bo Miao, a postdoctoral fellow in physics at the University of Maryland and co-first author on the work. “As the laser pulse travels in the plasma, because it is so intense, it pushes the electrons out of its path, like water pushed aside by the prow of a boat. Those electrons loop around the boat and gather right behind it, traveling in the pulse’s wake.”
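
How strong is the wave the “boat” leaves behind? A commonly quoted rule of thumb for the maximum field a plasma wave can support is about 96 volts per meter times the square root of the electron density in particles per cubic centimeter. The density below is an assumed, typical value for such experiments rather than a figure from this paper, but it shows why plasma waves can sustain fields hundreds to a thousand times stronger than conventional accelerator cavities, the jump Shrock describes.

```python
import math

# Rough scale of the accelerating field in a laser-driven plasma wave, using the
# standard cold-plasma wave-breaking estimate: E0 [V/m] ~ 96 * sqrt(n_e [cm^-3]).
n_e = 1e17                         # assumed plasma electron density in cm^-3
E_plasma = 96.0 * math.sqrt(n_e)   # accelerating field in volts per meter

E_rf = 50e6                        # tens of MV/m, typical of conventional RF cavities

print(f"plasma wakefield ~ {E_plasma / 1e9:.0f} GV/m")    # ~30 GV/m
print(f"gain over RF cavities ~ {E_plasma / E_rf:.0f}x")  # several hundred times
```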

Laser wakefield acceleration was first proposed in 1979 and demonstrated in 1995. But the distance over which it could accelerate electrons remained stubbornly limited to a couple of centimeters. What enabled the UMD and CSU team to leverage wakefield acceleration more effectively than ever before was a technique the UMD team pioneered to tame the high-energy beam and keep it from spreading its energy too thin. Their technique punches a hole through the plasma, creating a waveguide that keeps the beam’s energy focused.

“A waveguide allows a pulse to propagate over a much longer distance,” Shrock explains. “We need to use plasma because these pulses are so high energy, they're so bright, they would destroy a traditional fiber optic cable. Plasma cannot be destroyed because in some sense it already is.”

Their technique creates something akin to fiber optic cables—the things that carry fiber optic internet service and other telecommunications signals—out of thin air. Or, more precisely, out of carefully sculpted jets of hydrogen gas.

A conventional fiber optic waveguide consists of two components: a central “core” guiding the light, and a surrounding “cladding” preventing the light from leaking out. To make their plasma waveguide, the team uses an additional laser beam and a jet of hydrogen gas. As this additional “guiding” laser travels through the jet, it rips the electrons off the hydrogen atoms and creates a channel of plasma. The plasma is hot and quickly starts expanding, creating a lower density plasma “core” and a higher density gas on its fringe, like a cylindrical shell. Then, the main laser beam (the one that will gather electrons in its wake) is sent through this channel. The very front edge of this pulse turns the higher density shell to plasma as well, creating the “cladding.” 

“It's kind of like the very first pulse clears an area out,” says Shrock, “and then the high-intensity pulse comes down like a train with somebody standing at the front throwing down the tracks as it's going.” 

Using UMD’s optically generated plasma waveguide technique, combined with the CSU team’s high-powered laser and expertise, the researchers were able to accelerate some of their electrons to a staggering 5 GeV. This is still a factor of 3 less than SLAC’s massive accelerator, and not quite the maximum achieved with laser wakefield acceleration (that honor belongs to a team at Lawrence Berkeley National Laboratory). However, the laser energy used per GeV of acceleration in the new work is a record, and the team says their technique is more versatile: It can potentially produce electron bursts thousands of times per second (as opposed to roughly once per second), making it a promising technique for many applications, from high-energy physics to the generation of X-rays that can take videos of molecules and atoms in action, as at LCLS. Now that the team has demonstrated the success of the method, they plan to refine the setup to improve performance and increase the acceleration to higher energies.

“Right now, the electrons are generated along the full length of the waveguide, 20 centimeters long, which makes their energy distribution less than ideal,” says Miao. “We can improve the design so that we can control where they are precisely injected, and then we can better control the quality of the accelerated electron beam.”

While the dream of LCLS on a tabletop is not a reality quite yet, the authors say this work shows a path forward. “There’s a lot of engineering and science to be done between now and then,” Shrock says. “Traditional accelerators produce highly repeatable beams with all the electrons having similar energies and traveling in the same direction. We are still learning how to improve these beam attributes in multi-GeV laser wakefield accelerators. It’s also likely that to achieve energies on the scale of tens of GeV, we will need to stage multiple wakefield accelerators, passing the accelerated electrons from one stage to the next while preserving the beam quality. So there’s a long way between now and having an LCLS type facility relying on laser wakefield acceleration.” 

This work was supported by the U.S. Department of Energy (DE-SC0015516, LaserNetUS DE-SC0019076/FWP#SCW1668, and DE-SC0011375), and the National Science Foundation (PHY1619582 and PHY2010511).


Story by Dina Genkina

In addition to Milchberg, Rocca, Shrock and Miao, authors on the paper included Linus Feder, formerly a graduate student in physics at UMD and now a postdoctoral researcher at the University of Oxford, Reed Hollinger, John Morrison, Huanyu Song, and  Shoujun Wang, all research scientists at CSU, Ryan Netbailo, a graduate student in electrical and computer engineering at CSU, and Alexander Picksley, formerly a graduate student in physics at the University of Oxford and now a postdoctoral researcher at Lawrence Berkeley National Lab. 

Quantum Computers Are Starting to Simulate the World of Subatomic Particles

There is a heated race to make quantum computers deliver practical results. But this race isn't just about making better technology—usually defined in terms of having fewer errors and more qubits, which are the basic building blocks that store quantum information. At least for now, the quantum computing race requires grappling with the complex realities of both quantum technologies and difficult problems. To develop quantum computing applications, researchers need to understand a particular quantum technology and a particular challenging problem and then adapt the strengths of the technology to address the intricacies of the problem.

Assistant Professor Zohreh Davoudi, a member of the Maryland Center for Fundamental Physics, has been working with multiple colleagues at UMD to ensure that the problems that she cares about are among those benefiting from early advances in quantum computing. The best modern computers have often proven inadequate at simulating the details that nuclear physicists need to understand our universe at the deepest levels.

Davoudi and JQI Fellow Norbert Linke are collaborating to push the frontier of both the theories and technologies of quantum simulation through research that uses current quantum computers. Their research is intended to illuminate a path toward simulations that can cut through the current blockade of fiendishly complex calculations and deliver new theoretical predictions. For example, quantum simulations might be the perfect tool for producing new predictions based on theories that combine Einstein’s theory of special relativity and quantum mechanics to describe the basic building blocks of nature—the subatomic particles and the forces among them—in terms of “quantum fields.” Such predictions are likely to reveal new details about the outcomes of high-energy collisions in particle accelerators and other lingering physics questions.

Current quantum computers, utilizing technologies like the trapped ion device on the left, are beginning to tackle problems theoretical physicists care about, like simulating particle physics models. More than 60 years ago, the physicist Julian Schwinger laid the foundation for describing the relativistic and quantum mechanical behaviors of subatomic particles and the forces among them, and now his namesake model is serving as an early challenge for quantum computers. (Credit: Z. Davoudi/UMD with elements adopted from Emily Edwards/JQI (trapped ion device), Dizzo via Getty Images (abstract photon rays), and CERN (Schwinger photo))

The team’s current efforts might help nuclear physicists, including Davoudi, to take advantage of the early benefits of quantum computing instead of needing to rush to catch up when quantum computers hit their stride. For Linke, who is also an assistant professor of physics at UMD, the problems faced by nuclear physicists provide a challenging practical target to take aim at during these early days of quantum computing.

In a new paper in PRX Quantum, Davoudi, Linke and their colleagues have combined theory and experiment to push the boundaries of quantum simulations—testing the limits of both the ion-based quantum computer in Linke’s lab and proposals for simulating quantum fields. Both Davoudi and Linke are also part of the NSF Quantum Leap Challenge Institute for Robust Quantum Simulation that is focused on exploring the rich opportunities presented by quantum simulations.

The new project wasn’t about adding more qubits to the computer or stamping out every source of error. Rather, it was about understanding how current technology can be tested against quantum simulations that are relevant to nuclear physicists so that both the theoretical proposals and the technology can progress in practical directions. The result was both a better quantum computer and improved quantum simulations of a basic model of subatomic particles.

“I think for the current small and noisy devices, it is important to have a collaboration of theorists and experimentalists so that we can implement useful quantum simulations,” says JQI graduate student Nhung Nguyen, who was the first author of the paper. “There are many things we could try to improve on the experimental side, but knowing which one leaves the greatest impact on the result helps guide us in the right direction. And what makes the biggest impact depends a lot on what you try to simulate.”

The team knew the biggest and most rewarding challenges in nuclear physics are beyond the reach of current hardware, so they started with something a little simpler than reality: the Schwinger model. Instead of looking at particles in reality’s three dimensions evolving over time, this model pares things down to particles existing in just one dimension over time. The researchers also further simplified things by using a version of the model that breaks continuous space into discrete sites. So in their simulations, space only exists as one line of distinct sites, like a single column cut from a chessboard, and the particles are like pieces that must always reside in one square or another along that column.

Despite the model being stripped of so much of reality’s complexity, interesting physics can still play out in it. The physicist Julian Schwinger developed this simplified model of quantum fields to mimic parts of physics that are integral to the formation of both the nuclei at the centers of atoms and the elementary particles that make them up.

“The Schwinger model kind of hits the sweet spot between something that we can simulate and something that is interesting,” says Minh Tran, an MIT postdoctoral researcher and former JQI graduate student who is a coauthor on the paper. “There are definitely more complicated and more interesting models, but they're also more difficult to realize in the current experiments.”

In this project, the team looked at simulations of electrons and positrons—the antiparticles of electrons—appearing and disappearing over time in the Schwinger model. For convenience, the team started the simulation with an empty space—a vacuum. The creation and annihilation of a particle and its antiparticle out of vacuum is one of the significant predictions of quantum field theory. Schwinger’s work establishing this description of nature earned him, alongside Richard Feynman and Sin-Itiro Tomonaga, the Nobel Prize in physics in 1965. Simulating the details of such fundamental physics from first principles is a promising and challenging goal for quantum computers.

Nguyen led the experiment that simulated Schwinger’s pair production on the Linke Lab quantum computer, which uses ions—charged atoms—as the qubits.

“We have a quantum computer, and we want to push the limits,” Nguyen says. “We want to see if we optimize everything, how long can we go with it and is there something we can learn from doing the experimental simulation.”

The researchers simulated the model using up to six qubits and a preexisting language of computing actions called quantum gates. This approach is an example of digital simulation. In their computer, the ions stored information about whether particles or antiparticles exist at each site in the model, and interactions were described using a series of gates that can change the ions and let them influence each other.

In the experiments, the gates only manipulated one or two ions at a time, so the simulation couldn’t include everything in the model interacting and changing simultaneously. The reality of digital simulations demands the model be chopped into multiple pieces that each evolve over small steps in time. The team had to figure out the best sequence of their individual quantum gates to approximate the model changing continuously over time.

“You're just approximately applying parts of what you want to do bit by bit,” Linke says. “And so that's an approximation, but all the orderings—which one you apply first, and which one second, etc.—will approximate the same actual evolution. But the errors that come up are different from different orderings. So there's a lot of choices here.”
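
This chop-it-into-small-steps strategy is often called Trotterization. The toy calculation below uses a generic two-qubit Hamiltonian rather than the team’s actual Schwinger-model circuit; it simply shows the trade-off behind those ordering choices, with the stepwise approximation converging to the true evolution as the steps get finer, at the cost of more gates.

```python
import numpy as np
from scipy.linalg import expm

# Toy "digital" time evolution: approximate evolution under H = A + B by
# alternating short evolutions under A and B alone (Trotterization).
# Generic two-qubit example, NOT the team's Schwinger-model encoding.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(X, X)                      # a two-qubit interaction piece
B = np.kron(Z, I2) + np.kron(I2, Z)    # single-qubit pieces
H = A + B

t = 1.0
exact = expm(-1j * H * t)              # the continuous evolution being approximated

for n_steps in (1, 4, 16, 64):
    dt = t / n_steps
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)   # one choice of ordering
    approx = np.linalg.matrix_power(step, n_steps)
    # The error shrinks as the time step gets finer, at the cost of more gates.
    print(n_steps, np.linalg.norm(approx - exact))
```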

Many things go into making those choices, and one important factor is the model’s symmetries. In physics, a symmetry describes a change that leaves the equations of a model unchanged. For instance, in our universe rotating only changes your perspective and not the equations describing gravity, electricity or magnetism. However, the equations that describe specific situations often have more restrictive symmetries. So if an electron is alone in space, it will see the same physics in every direction. But if that electron is between the atoms in a metal, then the direction matters a lot: Only specific directions look equivalent. Physicists often benefit from considering symmetries that are more abstract than moving around in space, like symmetries about reversing the direction of time.

The Schwinger model makes a good starting point for the team’s line of research because of how it mimics aspects of complex nuclear dynamics and yet has simple symmetries.

“Once we aim to simulate the interactions that are in play in nuclear physics, the expression of the relevant symmetries is way more complicated and we need to be careful about how to encode them and how to take advantage of them,” Davoudi says. “In this experiment, putting things on a one-dimensional grid is only one of the simplifications. By adopting the Schwinger model, we have also greatly simplified the notion of symmetries, which ends up becoming a simple electric charge conservation. In our three-dimensional reality though, those more complicated symmetries are the reason we have bound atomic nuclei and hence everything else!”

The Schwinger model’s electric charge conservation symmetry keeps the total amount of electric charge the same. That means that if the simulation of the model starts from the empty state, then an electron should always be accompanied by a positron when it pops into or out of existence. So by choosing a sequence of quantum gates that always maintains this rule, the researchers knew that any result that violated it must be an error from experimental imperfections. They could then throw out the obviously bad data—a process called post-selection. This helped them avoid corrupted data but required more runs than if the errors could have been prevented.
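
In practice, post-selection amounts to filtering the measured shots. The sketch below is purely schematic: the bitstrings and the site-by-site charge assignment (even-indexed sites standing in for electrons, odd-indexed sites for positrons) are invented for the example and are not the experiment’s actual qubit encoding.

```python
# Schematic post-selection: keep only measurement shots consistent with charge
# conservation.  The encoding here is illustrative only -- assume even-indexed
# sites hold electrons (charge -1 when occupied) and odd-indexed sites hold
# positrons (charge +1 when occupied).
def total_charge(bitstring):
    charge = 0
    for site, bit in enumerate(bitstring):
        if bit == "1":
            charge += +1 if site % 2 else -1
    return charge

# Fake example shots; in a real run these come from repeated measurements.
shots = ["000000", "110000", "100000", "001100", "010000"]

# Starting from the vacuum, total charge must stay zero, so particles and
# antiparticles can only appear in pairs.  Anything else is flagged as an error.
kept = [s for s in shots if total_charge(s) == 0]
discarded = [s for s in shots if total_charge(s) != 0]

print("kept:", kept)            # e.g. '110000' looks like one electron-positron pair
print("discarded:", discarded)  # shots that violate charge conservation
```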

The team also explored a separate way to use the Schwinger model’s symmetries. Some orderings of the simulation steps might prove advantageous despite not obeying the model’s symmetry rules, so suppressing the errors that result from symmetry-violating orderings could prove useful. Earlier this year, Tran and colleagues at JQI showed there is a way to cause certain errors, including ones from a symmetry-defying order of steps, to interfere with each other and cancel out.

The researchers applied the proposed procedure in an experiment for the first time. They found that it did decrease errors that violated the symmetry rules. However, due to other errors in the experiment, the process didn’t generally improve the results and overall was not better than resorting to post-selection. The fact that this method didn’t work well for this experiment provided the team with insights into the errors occurring during their simulations.

All the tweaking and trial and error paid off. Thanks to the improvements the researchers made, including upgrading the hardware and implementing strategies like post-selection, they increased how much information they could get from the simulation before it was overwhelmed by errors. The experiment simulated the Schwinger model evolving for about three times longer than previous quantum simulations. This progress meant that instead of just seeing part of a cycle of particle creation and annihilation in the Schwinger model, they were able to observe multiple complete cycles.

“What is exciting about this experiment for me is how much it has pushed our quantum computer forward,” says Linke. “A computer is a generic machine—you can do basically anything on it. And this is true for a quantum computer; there are all these various applications. But this problem was so challenging, that it inspired us to do the best we can and upgrade our system and go in new directions. And this will help us in the future to do more.”

There is still a long road before the quantum computing race ends, and Davoudi isn’t betting on just digital simulations to deliver the quantum computing prize for nuclear physicists. She is also interested in analog simulations and hybrid simulations that combine digital and analog approaches. In analog simulations, researchers directly map parts of their model onto those of an experimental simulation. Analog quantum simulations generally require fewer computing resources than their digital counterparts. But implementing analog simulations often requires experimentalists to invest more effort in specialized preparation since they aren’t taking advantage of a set of standardized building blocks that has been preestablished for their quantum computer.

Moving forward, Davoudi and Linke are interested in further research on more efficient mappings onto the quantum computer and possibly testing simulations using a hybrid approach they have proposed. In this approach, they would replace a particularly challenging part of the digital mapping by using the phonons—quantum particles of sound—in Linke Lab’s computer as direct stand-ins for the photons—quantum particles of light—in the Schwinger model and other similar models in nuclear physics.

“Being able to see that the kind of theories and calculations that we do on paper are now being implemented in reality on a quantum computer is just so exciting,” says Davoudi. “I feel like I'm in a position that in a few decades, I can tell the next generations that I was so lucky to be able to do my calculations on the first generations of quantum computers. Five years ago, I could have not imagined this day.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/quantum-computers-are-starting-simulate-world-subatomic-particles

 

Bilayer Graphene Inspires Two-Universe Cosmological Model

Physicists sometimes come up with crazy stories that sound like science fiction. Some turn out to be true, like how the curvature of space and time described by Einstein was eventually borne out by astronomical measurements. Others linger on as mere possibilities or mathematical curiosities.

In a new paper in Physical Review Research, Victor Galitski and graduate student Alireza Parhizkar have explored the imaginative possibility that our reality is only one half of a pair of interacting worlds. Their mathematical model may provide a new perspective for looking at fundamental features of reality—including why our universe expands the way it does and how that relates to the most minuscule lengths allowed in quantum mechanics. These topics are crucial to understanding our universe and are part of one of the great mysteries of modern physics.

The pair of scientists stumbled upon this new perspective when they were looking into research on sheets of graphene—single atomic layers of carbon in a repeating hexagonal pattern. They realized that experiments on the electrical properties of stacked sheets of graphene produced results that looked like little universes and that the underlying phenomenon might generalize to other areas of physics. In stacks of graphene, new electrical behaviors arise from interactions between the individual sheets, so maybe unique physics could similarly emerge from interacting layers elsewhere—perhaps in cosmological theories about the entire universe.

A curved and stretched sheet of graphene lying over another curved sheet creates a new pattern that impacts how electricity moves through the sheets. A new model suggests that similar physics might emerge if two adjacent universes are able to interact. (Credit: Alireza Parhizkar, JQI)

“We think this is an exciting and ambitious idea,” says Galitski, who is also a Fellow of the Joint Quantum Institute (JQI). “In a sense, it's almost suspicious that it works so well by naturally ‘predicting’ fundamental features of our universe such as inflation and the Higgs particle as we described in a follow-up preprint.”

Stacked graphene’s exceptional electrical properties, and the possible connection to our reality having a twin, come from the special physics produced by moiré patterns. Moiré patterns form when two repeating patterns—anything from the hexagons of atoms in graphene sheets to the grids of window screens—overlap and one of the layers is twisted, offset, or stretched.

The patterns that emerge can repeat over lengths that are vast compared to the underlying patterns. In graphene stacks, the new patterns change the physics that plays out in the sheets, notably the electrons' behaviors. In the special case called “magic angle graphene,” the moiré pattern repeats over a length that is about 52 times longer than the pattern length of the individual sheets, and the energy level that governs the behaviors of the electrons drops precipitously, allowing new behaviors, including superconductivity.
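
That factor of 52 follows from the basic geometry of a moiré pattern: twisting two identical lattices by a small angle produces a superlattice whose period is roughly the lattice constant divided by twice the sine of half the twist angle. A quick check with graphene’s standard lattice constant and the roughly 1.1-degree magic angle, both textbook values rather than numbers specific to this work:

```python
import math

# Moire superlattice period for two identical lattices twisted by a small angle:
#   L = a / (2 * sin(theta / 2))
a_graphene = 0.246   # graphene lattice constant in nanometers (standard value)
theta_deg = 1.1      # the "magic angle" for twisted bilayer graphene, in degrees

theta = math.radians(theta_deg)
L_moire = a_graphene / (2 * math.sin(theta / 2))

print(f"moire period ~ {L_moire:.1f} nm")            # ~13 nm
print(f"ratio to a   ~ {L_moire / a_graphene:.0f}")  # ~52, as quoted above
```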

Galitski and Parhizkar realized that the physics in two sheets of graphene could be reinterpreted as the physics of two two-dimensional universes where electrons occasionally hop between universes. This inspired the pair to generalize the math to apply to universes made of any number of dimensions, including our own four-dimensional one, and to explore whether similar phenomena resulting from moiré patterns might pop up in other areas of physics. This started a line of inquiry that brought them face to face with one of the major problems in cosmology.

“We discussed if we can observe moiré physics when two real universes coalesce into one,” Parhizkar says. “What do you want to look for when you're asking this question? First you have to know the length scale of each universe.”

A length scale—or a scale of a physical value generally—describes what level of accuracy is relevant to whatever you are looking at. If you’re approximating the size of an atom, then a ten-billionth of a meter matters, but that scale is useless if you’re measuring a football field because it is on a different scale. Physics theories put fundamental limits on some of the smallest and largest scales that make sense in our equations.

The scale of the universe that concerned Galitski and Parhizkar is called the Planck length, and it defines the smallest length that is consistent with quantum physics. The Planck length is directly related to a constant—called the cosmological constant—that is included in Einstein’s field equations of general relativity. In the equations, the constant influences whether the universe—outside of gravitational influences—tends to expand or contract.
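
For scale, the Planck length is built from three fundamental constants (the reduced Planck constant, Newton’s gravitational constant and the speed of light) and works out to about 1.6 × 10^-35 meters:

```python
import math

# The Planck length: l_P = sqrt(hbar * G / c^3)
hbar = 1.054571817e-34   # reduced Planck constant, in joule-seconds
G = 6.67430e-11          # Newton's gravitational constant, in m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, in meters per second

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.2e} m")   # ~1.6e-35 meters
```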

This constant is fundamental to our universe. So to determine its value, scientists, in theory, just need to look at the universe, measure several details, like how fast galaxies are moving away from each other, plug everything into the equations and calculate what the constant must be.

This straightforward plan hits a problem because our universe contains both relativistic and quantum effects. The effect of quantum fluctuations across the vast vacuum of space should influence behaviors even at cosmological scales. But when scientists try to combine the relativistic understanding of the universe given to us by Einstein with theories about the quantum vacuum, they run into problems.

One of those problems is that whenever researchers attempt to use observations to approximate the cosmological constant, the value they calculate is much smaller than they would expect based on other parts of the theory. More importantly, the value jumps around dramatically depending on how much detail they include in the approximation instead of homing in on a consistent value. This lingering challenge is known as the cosmological constant problem, or sometimes the “vacuum catastrophe.”

“This is the largest—by far the largest—inconsistency between measurement and what we can predict by theory,” Parhizkar says. “It means that something is wrong.”

Since moiré patterns can produce dramatic differences in scales, moiré effects seemed like a natural lens to view the problem through. Galitski and Parhizkar created a mathematical model (which they call moiré gravity) by taking two copies of Einstein’s theory of how the universe changes over time and introducing extra terms in the math that let the two copies interact. Instead of looking at the scales of energy and length in graphene, they were looking at the cosmological constants and lengths in universes.

Galitski says that this idea arose spontaneously when they were working on a seemingly unrelated project that is funded by the John Templeton Foundation and is focused on studying hydrodynamic flows in graphene and other materials to simulate astrophysical phenomena.

Playing with their model, they showed that two interacting worlds with large cosmological constants could override the expected behavior from the individual cosmological constants. The interactions produce behaviors governed by a shared effective cosmological constant that is much smaller than the individual constants. The calculation for the effective cosmological constant circumvents the problem researchers have with the value of their approximations jumping around because over time the influences from the two universes in the model cancel each other out.
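
The spirit of that cancellation shows up in ordinary moiré patterns, where two short, nearly equal periods combine into a “beat” period far longer than either one. The snippet below shows only that generic moiré effect as an analogy; it is not the moiré-gravity model’s actual equations.

```python
# Generic moire "beat" effect: two short, nearly equal length scales combine
# into an effective scale far longer than either one.  An analogy for the
# cancellation described above, not the paper's moire-gravity math.
L1 = 1.00   # first pattern's period (arbitrary units)
L2 = 1.02   # second pattern's period, slightly different

L_beat = 1.0 / abs(1.0 / L1 - 1.0 / L2)
print(f"individual periods: {L1}, {L2}")
print(f"effective beat period: {L_beat:.0f}")   # ~51, much longer than either input
```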

“We don't claim—ever—that this solves cosmological constant problem,” Parhizkar says. “That's a very arrogant claim, to be honest. This is just a nice insight that if you have two universes with huge cosmological constants—like 120 orders of magnitude larger than what we observe—and if you combine them, there is still a chance that you can get a very small effective cosmological constant out of them.”

In preliminary follow-up work, Galitski and Parhizkar have started to build upon this new perspective by diving into a more detailed model of a pair of interacting worlds—that they dub “bi-worlds.” Each of these worlds is a complete world on its own by our normal standards, and each is filled with matching sets of all matter and fields. Since the math allowed it, they also included fields that simultaneously lived in both worlds, which they dubbed “amphibian fields.”

The new model produced additional results the researchers find intriguing. As they put together the math, they found that part of the model looked like important fields that are part of reality. The more detailed model still suggests that two worlds could explain a small cosmological constant and provides details about how such a bi-world might imprint a distinct signature on the cosmic background radiation—the light that lingers from the earliest times in the universe.

This signature could possibly be seen—or definitively not be seen—in real world measurements. So future experiments could determine if this unique perspective inspired by graphene deserves more attention or is merely an interesting novelty in the physicists’ toy bin.

“We haven't explored all the effects—that's a hard thing to do, but the theory is falsifiable experimentally, which is a good thing,” Parhizkar says. “If it's not falsified, then it's very interesting because it solves the cosmological constant problem while describing many other important parts of physics. I personally don't have my hopes up for that— I think it is actually too big to be true.”

The research was supported by the Templeton Foundation and the Simons Foundation.

Original story by Bailey Bedford: https://jqi.umd.edu/news/bilayer-graphene-inspires-two-universe-cosmological-model?fbclid=IwAR2IS02vynZeBfnmX2tdgEr1TdLYb2OUN1E1vIXGUj1lDiLvbgFPl_LCzxs 

New Perspective Blends Quantum and Classical to Understand Quantum Rates of Change

There is nothing permanent except change. This is perhaps never truer than in the fickle and fluctuating world of quantum mechanics.

The quantum world is in constant flux. The properties of quantum particles flit between discrete, quantized states without any possibility of ever being found in an intermediate state. How quantum states change defies normal intuition and remains the topic of active debate—for both scientists and philosophers.

For instance, scientists can design a quantum experiment where they find a particle’s spin—a quantum property that behaves like a magnet—pointing either up or down. No matter how often they perform the experiment they never find the spin pointing in a direction in between. Quantum mechanics is good at describing the probability of finding one or the other state and describing the state as a mix of the two when not being observed, but what actually happens between observations is ambiguous.

In the figure, a path winds through an abstract landscape of possible quantum states (gray sheet). At each point along the journey, a quantum measurement could yield many different outcomes (colorful distributions below the sheet). A new theory places strict limits on how quickly (and how slowly) the result of a quantum measurement can change over time depending on the various circumstances of the experiment. For instance, how precisely researchers initially know the value of a measurement affects how quickly the value can change—a less precise value (the wider distribution on the left) can change more quickly (represented by the longer arrow pointing away from its peak) than a more certain value (the narrower peak on the right). Credit: Schuyler Nicholson

This ambiguity extends to looking at interacting quantum particles as a group and even to explaining how our everyday world can result from these microscopic quantum foundations. The rules governing things like billiards balls and the temperature of a gas look very different from the quantum rules governing things like electron collisions and the energy absorbed or released by a single atom. And there is no known sharp, defining line between these two radically different domains of physical laws. Quantum changes are foundational to our universe and understanding them is becoming increasingly important for practical applications of quantum technologies.

In a paper published Feb. 28, 2022 in the journal Physical Review X, Adjunct Assistant Professor Alexey Gorshkov, Assistant Research Scientist Luis Pedro García-Pintos and their colleagues provide a new perspective for investigating quantum changes. They developed a mathematical description that sorts quantum behaviors in a system into two distinct parts. One piece of their description looks like the behavior of a quantum system that isn’t interacting with anything, and the second piece looks like the familiar behavior of a classical system. Using this perspective, the researchers identified limits on how quickly quantum systems can evolve based on their general features, and they better describe how those changes relate to changes in non-quantum situations.

“Large quantum systems cannot in general be simulated on classical computers,” says Gorshkov, who is a Fellow of the Joint Quantum Institute (JQI) and the Joint Center for Quantum Information and Computer Science (QuICS). “Therefore, understanding something important about how these systems behave—such as our insights into the speed of quantum changes—is always exciting and bound to have applications in quantum technologies.”

There is a long history of researchers investigating quantum changes, with most of the research focused on transitions between quantum states. These states contain all the information about a given quantum system. But two distinct states can be as mathematically different as possible while being extremely similar in practice. This means the state approach often offers a perspective that's too granular to generate useful experimental insights.

In this new research, the team instead focused on an approach that is more widely applicable in experiments. They didn’t focus on changes of quantum states themselves but rather on observables—the results of quantum measurements, which are what scientists and quantum computer users can actually observe. Observables can be any number of things, such as the momentum of a particle, the total magnetization of a collection of particles or the charge of a quantum battery (a promising but still theoretical quantum technology). The researchers also chose to investigate quantum behaviors that are influenced by the outside world—a practical inevitability.

The team looked at general features of a possible quantum system, like how well known its energy is and how precisely the value they want to look at is known beforehand. They used these features to derive mathematical rules about how fast an observable can change for the given conditions.

“The spirit of the whole approach is not to go into the details of what the system may be,” says García-Pintos, who is also a QuICS postdoctoral researcher and is the lead author on the paper. “The approach is completely general. So once you have it, you can ask about a quantum battery, or anything you want, like how fast you're able to flip a qubit.”

This approach is possible because in quantum mechanics, two quantities can be intricately connected with strict mathematical rules about what you can know about them simultaneously (the most famous of these rules is the Heisenberg uncertainty principle for a quantum particle’s location and speed).
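
A textbook cousin of these bounds gives the flavor: for a closed system, the rate at which a measured average can change never exceeds twice the product of the uncertainty in the observable and the uncertainty in the energy (with the reduced Planck constant set to 1). The check below runs that standard inequality on a single precessing qubit; it illustrates the general idea rather than the bound derived in the paper, which also accounts for outside influences.

```python
import numpy as np

# Textbook speed limit for observables (hbar = 1): |d<A>/dt| <= 2 * dA * dH,
# checked on a single qubit precessing about the z-axis.  This is the standard
# closed-system bound, shown only to illustrate the flavor of such limits.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.7 * Z   # Hamiltonian: precession about the z-axis
A = X         # observable being tracked: the spin's x-component

# A generic qubit state (Bloch angles chosen so nothing special happens).
theta, phi = np.pi / 3, np.pi / 3
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def expect(op):
    return np.real(np.conj(psi) @ op @ psi)

def uncertainty(op):
    return np.sqrt(expect(op @ op) - expect(op) ** 2)

# Exact rate of change of <A> from the Ehrenfest theorem: d<A>/dt = i<[H, A]>.
rate = np.abs(np.conj(psi) @ (H @ A - A @ H) @ psi)
bound = 2 * uncertainty(A) * uncertainty(H)

print(f"|d<A>/dt| = {rate:.3f}")   # ~1.05
print(f"bound     = {bound:.3f}")  # ~1.09, so the rate respects the limit
```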

In addition to deriving these new limits, the researchers were able to reverse the process, showing how to design a system that achieves a desired change quickly.

These new results build upon previous work from García-Pintos and colleagues, which studied classical changes such as how quickly energy and entropy can be exchanged between non-quantum systems. That earlier result allowed the researchers to break up different behaviors into quantum-like and non-quantum-like descriptions. With this approach, they have a single theory that spans the extremes of possible outside influence—from enough interaction to allow no quantum behavior to the purely theoretical realms of quantum situations without any external influence.

“It's nice; it's elegant that we have this framework where you can include both of these extremes,” García-Pintos says. “One interesting thing is that when you combine these two bounds, we get something that is tighter, meaning better than the established bound.”

Having the two terms also allowed the researchers to describe the slowest speed at which a particular observable will change based on the details of the relevant situation. In essence, to find the slowest possible change they look at what happens when the two types of effects are completely working against each other. This is the first time that a lower bound has been put on observables in this way.

In the future, these results might provide insights into how to best design quantum computer programs or serve as a starting point for creating even more stringent limits on how quickly specific quantum situations can change.

Original story by Bailey Bedford: https://jqi.umd.edu/news/new-perspective-blends-quantum-and-classical-understand-quantum-rates-change

In addition to Gorshkov and García-Pintos, authors on the paper include Schuyler Nicholson, a postdoctoral fellow at Northwestern University; Jason R. Green, a professor of chemistry at the University of Massachusetts Boston; and Adolfo del Campo, a professor of physics at the University of Luxembourg.