Superconductivity’s Halo: Physicists Map Rare High-field Phase

A puzzling form of superconductivity that arises only under strong magnetic fields has been mapped and explained by a research team from UMD, NIST and Rice University that includes Andriy Nevidomskyy, professor of physics and astronomy at Rice University. Their findings, published in Science July 31, detail how uranium ditelluride (UTe2) develops a superconducting halo at high field strengths.

Traditionally, scientists have regarded magnetic fields as detrimental to superconductors. Even moderate magnetic fields typically weaken superconductivity, while stronger ones destroy it beyond a known critical threshold. However, UTe2 challenged these expectations when, in 2019, it was discovered to maintain superconductivity at critical fields hundreds of times stronger than those found in conventional materials.

Image by Sylvia Klare Lewin, Nicholas P. Butch/NIST & UMD

“When I first saw the experimental data, I was stunned,” said Andriy Nevidomskyy, a member of the Rice Advanced Materials Institute and the Rice Center for Quantum Materials. “The superconductivity was first suppressed by the magnetic field as expected but then reemerged in higher fields and only for what appeared to be a narrow field direction. There was no immediate explanation for this puzzling behavior.”

Superconducting resurrection in high fields

This phenomenon, initially identified by researchers at the University of Maryland Quantum Materials Center and the National Institute of Standards and Technology (NIST), has captivated physicists worldwide. In UTe2, superconductivity vanished above 10 Tesla, a field strength that is already immense by conventional standards, but surprisingly reemerged at field strengths exceeding 40 Tesla.

This unexpected revival has been dubbed the Lazarus phase. Researchers determined that this phase critically depends on the angle of the applied magnetic field in relation to the crystal structure. 

In collaboration with experimental colleagues at UMD and NIST, Nevidomskyy decided to map out the angular dependence of this high-field superconducting state. Their precise measurements revealed that the phase formed a toroidal, or doughnutlike, halo surrounding a specific crystalline axis. 

“Our measurements revealed a three-dimensional superconducting halo that wraps around the hard b-axis of the crystal,” said Sylvia Lewin of NIST, a co-lead author on the study. “This was a surprising and beautiful result.”

Building theory to fit halo

To explain these findings, Nevidomskyy developed a theoretical model that accounted for the data without relying heavily on debated microscopic mechanisms. His approach employed an effective phenomenological framework with minimal assumptions about the underlying pairing forces that bind electrons into Cooper pairs. 

The model successfully reproduced the nonmonotonic angular dependence observed in experiments, offering insights into how the orientation of the magnetic field influences superconductivity in UTe2. 

Deeper understanding of interplay

The research team found that the theory, fitted with a few key parameters, aligned remarkably well with the experimental features, particularly the halo’s angular profile. A key insight from the model is that Cooper pairs carry intrinsic angular momentum like a spinning top does in classical physics. The magnetic field interacts with this momentum, creating a directional dependence that matches the observed halo pattern. 
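The article stops short of equations, but the claim that Cooper pairs carry a magnetic moment has a standard phenomenological expression for spin-triplet superconductors. The following Ginzburg-Landau term is a textbook sketch offered here as an assumption, not necessarily the paper's actual model:

```latex
% Zeeman-like coupling between the applied field B and the spin moment
% of a triplet Cooper pair, described by the vector order parameter d:
\Delta F \;\propto\; -\,\gamma\,\mathbf{B}\cdot\mathbf{S}_{\mathrm{pair}},
\qquad
\mathbf{S}_{\mathrm{pair}} \;\propto\; i\,\mathbf{d}\times\mathbf{d}^{*}.
```

Because the pair moment is tied to the crystal axes in a strongly anisotropic material such as UTe2, the energy this term contributes swings with the field's orientation, offering a natural route to a direction-dependent condensation energy.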

This work lays the foundation for a deeper understanding of the interplay between magnetism and superconductivity in materials with strong crystal anisotropy like UTe2. 

“One of the experimental observations is the sudden increase in the sample magnetization, what we call a metamagnetic transition,” said NIST’s Peter Czajka, co-lead author on the study. “The high-field superconductivity only appears once the field magnitude has reached this value, itself highly angle-dependent.” 

The exact origin of this metamagnetic transition and its effect on superconductivity are hotly debated, and Nevidomskyy said he hopes the theory will help elucidate them.

“While the nature of the pairing glue in this material remains to be understood, knowing that the Cooper pairs carry a magnetic moment is a key outcome of this study and should help guide future investigations,” he said.

Co-authors of this study include Corey Frank and Nicholas Butch from NIST; Hyeok Yoon, Yun Suk Eo, Johnpierre Paglione and Gicela Saucedo Salas from UMD; and G. Timothy Noe and John Singleton from the Los Alamos National Laboratory. This research was supported by the U.S. Department of Energy and the National Science Foundation.

 Original article: https://news.rice.edu/news/2025/superconductivitys-halo-rice-theoretical-physicist-helps-map-rare-high-field-phase

New Protocol Demonstrates and Verifies Quantum Speedups in a Jiffy

While breakthrough results over the past few years have garnered headlines proclaiming the dawn of quantum supremacy, they have also masked a nagging problem that researchers have been staring at for decades: Demonstrating the advantages of a quantum computer is only half the battle; verifying that it has produced the right answer is just as important.

Now, researchers at JQI and the University of Maryland (UMD) have discovered a new way to quickly check the work of a quantum computer. They proposed a novel method to both demonstrate a quantum device’s problem-solving power and verify that it didn’t make a mistake. They described their protocol in an article published March 5, 2025, in the journal PRX Quantum.

“Perhaps the main reason most of us are so excited about studying large interacting quantum systems in general and quantum computers in particular is that these systems cannot be simulated classically,” says JQI Fellow Alexey Gorshkov, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS), a senior investigator at the National Science Foundation Quantum Leap Challenge Institute for Robust Quantum Simulation (RQS) and a physicist at the National Institute of Standards and Technology. “Coming up with ways to check that these systems are behaving correctly without being able to simulate them is a fun and challenging problem.”

Researchers have proposed a new way to both demonstrate and verify that quantum devices offer real speedups over ordinary computers. Their protocol might be suitable for near-term devices made from trapped ions or superconducting circuits, like the one shown above. (Credit: Kollár Lab/JQI)

In December 2024, Google announced its newest quantum chip, called Willow, accompanied by a claim that it had performed a calculation in five minutes that would have taken the fastest supercomputers 10 septillion years. That disparity suggested a strong demonstration of a quantum advantage and hinted at blazing fast proof that quantum computers offer exponential speedups over devices lacking that quantum je ne sais quoi.

But the problem that the Willow chip solved—a benchmark called random circuit sampling that involves running a random quantum computation and generating many samples of the output—is known to be hard to verify without a quantum computer. (Hardness in this context means that it would take a long time to compute the verification.) The Google team verified the solutions produced by their chip for small problems (problems with just a handful of qubits) using an ordinary computer, but they couldn’t come close to verifying the results of the 106-qubit problem that generated the headlines.

Fortunately, researchers have also discovered easy-to-verify problems that can nevertheless demonstrate quantum speedups. Such problems are hard for a classical (i.e., non-quantum) computer but easy for a quantum computer, which makes them prime candidates for showing off quantum prowess. Crucially, these problems also allow a classical computer to quickly check the work of the quantum device.

Even so, not every problem with these features is practical for the quantum computers that exist right now or that will exist in the near future. In their new paper, the authors combined two key earlier results to construct a novel protocol that is more suitable for demonstrating and verifying the power of soon-to-be-built quantum devices.

One of the earlier results identified a suitable problem with the right balance of being difficult to solve but easy to verify. Solving that problem amounts to preparing the lowest energy state of a simple quantum system, measuring it, and reporting the outcomes. The second earlier result described a generic method for verifying a quantum computation after it has been performed—a departure from standard methods that require a live back-and-forth while the computation is running. Together, the two results significantly cut down the number of repetitions needed for verification, from an amount that grows as the square of the number of qubits down to a constant that doesn't grow at all.

“We combined them together and, somewhat unexpectedly, this also reduced the sample complexity to a really low level,” says Zhenning Liu, the lead author of the new paper and a graduate student at QuICS.

The resulting protocol can run on any sufficiently powerful quantum computer, but its most natural implementation is on a particular kind of device called an analog quantum simulator.

Generally, quantum computers, which process information held by qubits, fall into two categories. There are digital quantum computers, like Google’s Willow chip, that run sequences of quantum instructions and manipulate qubits with discrete operations, similar to what ordinary digital computers do to bits. And then there are analog quantum computers that initialize qubits and let them evolve continuously. An analog quantum simulator is a special-purpose analog quantum computer. 

Liu and his colleagues—inspired by the kinds of quantum devices that are already available and driven by one of the primary research goals of RQS—focused on demonstrating and verifying quantum advantage on a subset of analog quantum simulators.

In particular, their protocol is tailored to analog quantum simulators capable of hosting simple nearest-neighbor interactions between qubits and making quantum measurements of individual qubits. These capabilities are standard fare for many kinds of experimental qubits built out of trapped ions or superconductors, but the researchers required one more ingredient that might be harder to engineer: an interaction between one special qubit—called the clock qubit—and all of the other qubits in the device.

“Quantum simulators will only be useful if we can be confident about their results,” says QuICS Fellow Andrew Childs, who is also the director of RQS and a professor of computer science at UMD. “We wanted to understand how to do this with the kind of simulators that can be built today. It's a hard problem that has been a lot of fun to work on.”

Assuming an analog quantum simulator with all these capabilities could be built, the researchers described a protocol to efficiently verify its operation by following a classic two-party tale in computer science. One party, the prover, wants to convince the world that their quantum device is the real deal. A second party, the verifier, is a diehard skeptic without a quantum computer who wants to challenge the prover and ascertain whether they are telling the truth.

In the future, a practical example of this kind of interaction might be a customer accessing a quantum computer in a data center that can only be reached via the cloud. In that setting, customers might want a way to check that they are really using a quantum device and aren’t being scammed. Alternatively, the authors say the protocol could be useful to scientists who want to verify that they’ve really built a quantum simulator in their lab. In that case, the device would be under the control of a researcher doing double duty as both verifier and prover, and they could ultimately prove to themselves and their colleagues that they’ve got a working quantum computer.

In either case, the protocol goes something like this. First, the verifier describes a specific instance of the problem and an initial state. Then, they ask the prover to use that description to prepare a fixed number of final states. The correct final state is unknown to the verifier, but it is closely related to the original problem of finding the lowest energy state of a simple quantum system. The verifier also chooses how certain they want to be about whether the prover has a truly quantum device, and they can guarantee a desired level of certainty by adjusting the number of final states that they ask the prover to prepare.

For each requested state, the verifier flips a coin. If it comes up heads, the verifier’s goal is to collect a valid solution to the problem, and they ask the prover to measure all the qubits and report the results. Based on the measurement of the special clock qubit, the verifier either throws the results away or stores them for later. Measuring the clock qubit essentially lets the verifier weed out invalid results. The results that get stored are potentially valid solutions, which the verifier will publish at the end of the protocol if the prover passes the rest of the verification.

If the coin comes up tails, the verifier’s goal is to test that the prover is running the simulation correctly. To do this, the verifier flips a second coin. If that coin comes up heads, the verifier asks the prover to make measurements that check whether the input state is correct. If the coin comes up tails, the verifier asks the prover to make measurements that reveal whether the prover performed the correct continuous evolution. 

The verifier then uses all the results stemming from that second coin flip to compute two numbers. In the paper, the team calculated thresholds for each number that separate fraudulent provers from those with real quantum-powered devices. If the two numbers clear those thresholds, the verifier can publish the stored answers, confident in the fact that the prover is telling the truth about their quantum machine.
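The flow above (first coin, clock-qubit filtering, second coin, two test scores against thresholds) can be sketched as verifier-side bookkeeping. This is a toy rendering: the prover, the measurement routines, the scores and the thresholds are all invented stand-ins for illustration, since the paper's actual observables and bounds are not given in the article.

```python
import random

# Toy sketch of the verifier's bookkeeping in the protocol described above.
# The prover here is a classical stand-in returning canned results; a real
# prover would be a quantum device.

def run_verifier(prover, num_states, input_threshold, evolution_threshold, rng):
    stored_solutions = []                 # candidate answers kept for publication
    input_scores, evolution_scores = [], []

    for _ in range(num_states):
        prover.prepare_final_state()
        if rng.random() < 0.5:            # first coin: heads -> collect a solution
            bits = prover.measure_all_qubits()
            clock, data = bits[0], bits[1:]
            if clock == 1:                # clock qubit weeds out invalid results
                stored_solutions.append(data)
        elif rng.random() < 0.5:          # tails, then second coin: heads
            input_scores.append(prover.measure_input_check())
        else:                             # tails, then second coin: tails
            evolution_scores.append(prover.measure_evolution_check())

    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    # Two numbers computed from the test rounds, each compared to a threshold
    # separating fraudulent provers from genuinely quantum ones:
    passed = (mean(input_scores) >= input_threshold
              and mean(evolution_scores) >= evolution_threshold)
    return (stored_solutions if passed else None), passed

class HonestProver:
    """Idealized stand-in that always passes; a cheater would score lower."""
    def prepare_final_state(self):
        pass                              # a real device runs the evolution here
    def measure_all_qubits(self):
        return [1, 0, 1]                  # clock qubit first, then data qubits
    def measure_input_check(self):
        return 1.0                        # perfect input-preparation score
    def measure_evolution_check(self):
        return 1.0                        # perfect continuous-evolution score

solutions, passed = run_verifier(HonestProver(), num_states=50,
                                 input_threshold=0.9, evolution_threshold=0.9,
                                 rng=random.Random(0))
```

With an honest prover the two scores clear their thresholds and the stored answers are released; a prover whose check scores fall below a threshold gets nothing published.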

There is a caveat to the protocol that limits its future use by a suspicious customer of a quantum computing cloud provider. The protocol assumes that the prover is honest about which measurements they make—it assumes that they aren’t trying to pull one over on the verifier and that they make the measurements that the verifier requests. The authors describe a second version of the protocol that parallels the first and relaxes this element of trust. In that version, the prover doesn't measure the final states but instead transmits them directly to the verifier as quantum states—a potentially challenging technical feat. With the states under their control, the verifier can flip the coins and make the measurements all on their own. This is why the protocol can still be useful for researchers trying to put their own device through its paces and demonstrate near-term quantum speedups in their labs.

Ultimately the team would love to relax the requirement that the prover is trusted to make the right measurements. But progress toward this more desirable feature has been tough to find, especially in the realm of quantum simulation.

“That's a really hard problem,” Liu says. “This is very, very nontrivial work, and, as far as I know, all work that has this feature relies on some serious cryptography. This is clearly not easy to do in quantum simulations.”

Original story by Chris Cesare: https://jqi.umd.edu/news/new-protocol-demonstrates-and-verifies-quantum-speedups-jiffy

In addition to Gorshkov, Zhenning Liu and Childs, the paper had several other authors: Dhruv Devulapalli, a graduate student in physics at UMD; Dominik Hangleiter, a former QuICS Hartree Postdoctoral Fellow who is now a Quantum Postdoctoral Fellow at the Simons Institute for the Theory of Computing at the University of California, Berkeley; Yi-Kai Liu, who is a QuICS Fellow and a senior investigator at RQS; and JQI Fellow Alicia Kollár, who is also a senior investigator at RQS.

 

Work on 2D Magnets Featured in Nature Physics Journal

University of Maryland Professor Cheng Gong (ECE), along with his postdocs Dr. Ti Xie and Dr. Jierui Liang and collaborators at Georgetown University (Professor Kai Liu’s group), UC Berkeley (Professor Ziqiang Qiu), the University of Tennessee, Knoxville (Professor David Mandrus’s group) and UMD Physics (Professor Victor M. Yakovenko), has made a new discovery on controlling magnetic domain behavior in a two-dimensional (2D) quantum magnet, with a paper published in the July 2025 issue of Nature Physics. Titled “High-efficiency optical training of itinerant two-dimensional magnets,” the work developed a new approach that uses ultralow-power optical incidence to control the size and spin orientations of the formed magnetic domains.

Prof. Victor Yakovenko, Dr. Ti Xie, and Prof. Cheng Gong. Photo credit: Shanchuan Liang and Dhanu Chettri

Generally, nature evolves toward lower energy for the sake of stability. For example, water flows from mountains down to valleys. Yet water puddles often remain trapped on a hillside instead of sliding all the way down, because physical barriers block their continued descent. In a nutshell, even though a physical system tends to develop toward its lowest-energy state (the ground state), it can be trapped in many local energy minima (metastable states). Controlling the kinetic process can guide a system into numerous previously unexplored metastable configurations.
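The hillside-puddle picture can be made concrete with a toy energy landscape. The function below is an invented illustration, not any material's real energy:

```python
# Toy illustration of metastability: gradient descent on an asymmetric
# double-well "energy landscape". Starting on one slope reaches the true
# ground state; starting on the other gets trapped in a higher local
# minimum, like the puddles on the hillside.

def energy(x):
    return (x**2 - 1)**2 + 0.3 * x        # two wells; the left one is deeper

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3       # derivative of energy(x)

def relax(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)                 # always move downhill
    return x

ground = relax(-2.0)     # settles near x = -1.04, the global minimum
trapped = relax(2.0)     # settles near x = +0.96, a metastable minimum
```

Both runs strictly lower the energy at every step, yet only one finds the ground state; which minimum the system ends up in is set by its starting point, i.e., by the kinetics rather than the landscape alone.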

In the recent Nature Physics article, Gong’s team sheds light on 2D magnets to control their magnetic phase transition kinetics, easily weaving a plethora of distinct metastable spin textures onto the atomically thin magnetic flatlands. “The stereotypical notion is that a material’s properties are set once its atomic composition and structure are fixed,” Gong explained. “This is not always the case. The electron spins can arrange themselves in distinct spatial patterns on top of an atomic lattice. Each spin pattern corresponds to a series of associated physical properties: magnetic, electrical, optical and even thermal. This means that one can create numerous quantum materials by magnetic dressing, without changing the material’s compositional skeleton at all.”

“The idea is out of the box, yet easily understandable,” Gong said of their design. “We implant optically excited spin-polarized electrons as tiny magnetic seeds throughout the 2D magnet by shining circularly polarized light during the cooling process. When a large 2D magnet flake is cooled across its magnetic phase transition temperature, the electron spins align to form many domains of either up or down orientation, usually in 50-50 populations. With the help of magnetic seeds, however, all the nearby spins align to the same orientation as the seeds, resulting in an enlarged domain size or even a single magnetic domain across the whole material. The orientation of the single magnetic domain can be dictated by the handedness of the circular light.” Their research article details how optical helicity and ultralow optical power density (approximately 20 microwatts per square micrometer) control the size and orientation of the formed domains. “Well, clearly, this is a non-chemical, reconfigurable method to create artificial quantum materials with arbitrarily designed spin textures, with hopefully on-demand properties,” Gong added.
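Gong's seeding picture can be cartooned in a few lines: fixed "seed" sites, whose sign plays the role of the light's handedness, decide which orientation the surrounding undecided spins adopt as the system cools. The 1D chain and the nearest-neighbor copying rule here are invented for illustration, not the paper's model:

```python
# Cartoon of optical "seeding": a 1D chain of undecided spins (0) that, as
# the system cools, copy the orientation of the nearest already-decided
# neighbor. Seed sites stand in for the optically implanted spin-polarized
# electrons.

def grow_domains(n, seeds):
    spins = [0] * n
    for site, orientation in seeds.items():
        spins[site] = orientation
    changed = True
    while changed:                        # sweep until every spin is decided
        changed = False
        new = spins[:]
        for i, s in enumerate(spins):
            if s == 0:
                for j in (i - 1, i + 1):
                    if 0 <= j < n and spins[j] != 0:
                        new[i] = spins[j] # align with a decided neighbor
                        changed = True
                        break
        spins = new
    return spins

# Seeds of one handedness -> a single domain spans the whole chain:
single = grow_domains(12, {2: +1, 9: +1})
# Seeds of opposite sign -> two domains with a wall between them:
two_domains = grow_domains(12, {2: +1, 9: -1})
```

The cartoon captures the key point: the final domain pattern is dictated entirely by the seeds, not by the 50-50 coin flips that would otherwise decide each region.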

“The work of the Gong group developed an innovative, non-synthetic method to create artificial quantum magnets by magnetically dressing 2D materials with designed spin textures, potentially reshaping the landscape of quantum materials. This advance is a valuable contribution to the ongoing Quantum Information Science initiatives in the U.S.,” remarked UMD Professor and Quantum Technology Center (QTC) Founding Director Ronald Walsworth.

The novel strategy of optically training 2D magnets may lead to energy-efficient technology innovations at large. Don Woodbury, Director of Innovation and Partnerships at the Clark School of Engineering, University of Maryland, said, “The technology developed in the Gong group represents state-of-the-art innovations in 2D spintronic and opto-spintronic devices with an ultracompact footprint, with wide implications for integrated nanoelectronics, nanophotonics and magnetoelectric sensors that could find use in both defense and civilian domains.”

Professor Sennur Ulukus, Chair of the Department of Electrical and Computer Engineering, University of Maryland, summarized: “The original research led by Professor Gong lies at the intersection of quantum materials and spintronic devices, resonating with the U.S. Quantum Information Science legislation and the CHIPS and Science Act. Gong’s sustained high-profile research achievements, featured in prestigious journals, are a testament to UMD’s quantum and microelectronics workforce.”

The research published in this Nature Physics article was primarily supported by grants from the Air Force Office of Scientific Research (award no. FA9550-22-1-0349) and the National Science Foundation (award nos. DMR-2340773, FuSe-2425599, DMR-2326944, ECCS-2429994, DMR-2005108 and ECCS-2429995).

 Original story: https://ece.umd.edu/news/story/discovery-led-by-professor-cheng-gong-featured-in-nature-physics-journal

NASA’s Parker Solar Probe Reveals a Key Particle Accelerator Near the Sun

Flying closer to the sun than any spacecraft before it, NASA’s Parker Solar Probe uncovered a new source of energetic particles near Earth’s star, according to a new study co-authored by University of Maryland researchers. 

Published in The Astrophysical Journal Letters on May 29, 2025, the paper suggests that a process linked to magnetic reconnection—the explosive merging and realigning of magnetic field lines—could propel particles near the sun to extremely high energy. The data sheds light on processes that were impossible to observe in such a harsh environment before Parker launched in 2018, according to study co-author and University of Maryland researcher James Drake.

As NASA’s Parker Solar Probe (trajectory shown in green) crossed the heliospheric current sheet, it encountered merging magnetic islands (areas shown in blue) and protons accelerated toward the sun, establishing reconnection as their source. Image credit: JHUAPL.

“We now, for the first time, have a spacecraft that is going through an enormous magnetic reconnection event and can directly measure everything, and that's simply never happened before,” said Drake, a Distinguished University Professor in UMD’s Department of Physics and Institute for Physical Science and Technology (IPST).

Study co-author and Parker Solar Probe project scientist Nour Rawafi, who is also a heliophysicist at the Johns Hopkins Applied Physics Laboratory, added that Parker is enabling researchers to see unexplored regions of the sun.

“Parker Solar Probe was designed to solve some of the sun’s biggest mysteries and uncover hidden processes we couldn’t detect from afar,” Rawafi said, “and this discovery hits right at the heart of that mission.”

Drake and Marc Swisdak, a research scientist in UMD’s Institute for Research in Electronics & Applied Physics (IREAP), were tapped to help analyze Parker Solar Probe data because of their expertise in magnetic reconnection. The two UMD researchers previously identified the mechanism driving the sun’s fast wind and have now interpreted data from a massive magnetic reconnection event measuring four times the size of the sun, according to Drake. 

This data was collected during Parker’s fourteenth swing by the sun in December 2022, when the probe crossed the heliospheric current sheet (HCS), an undulating structure invisible to human eyes. Like a twirling flamenco skirt, the sheet separates regions where the sun’s magnetic field points in opposite directions.

Ripples in the current sheet cause the magnetic fields to merge and rearrange through magnetic reconnection. This releases energy explosively, catapulting a jet of charged particles away as an “exhaust” of energized particles. That same phenomenon affects the Earth-space environment, creating auroral shows at Earth’s poles and geomagnetic storms capable of disrupting satellite communications and causing blackouts.

For nearly four hours in late 2022, Parker passed through the exhaust generated by these reconnection events in the HCS. There, it encountered protons being accelerated, unexpectedly, toward the sun—quashing any doubt over where this energy came from.

“These findings indicate that magnetic reconnection in the HCS is an important source of energetic particles in the near-sun solar wind,” said the study’s lead author Mihir Desai, a solar physicist at the Southwest Research Institute.

Some of the protons that Parker measured had nearly 1,000 times more energy than what could have been transferred by the available magnetic energy. To help pinpoint the mechanism for this surprising energy gain, Drake, Swisdak and IREAP Faculty Assistant Zhiyu Yin (Ph.D. ’24, physics) ran a simulation using a computational model that had been in development for several years at UMD. This study marks the first time their model has been used to directly simulate an observable event. 

“From that simulation, we calculated the spectrum of energetic particles and then compared that with what was seen in the Parker data, and we were able to get a pretty good match,” Drake explained.

Their simulations also confirmed earlier studies, including a 2006 paper co-authored by Drake and Swisdak, which identified “magnetic islands”—loops of the magnetic field that pinch off like water droplets when field lines merge—as the source of this extra energy boost. Particles trapped within the loops get an additional kick as the islands merge and shed their own energy, accelerating some particles nearly to the speed of light. 
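The island-merging energy boost is, at heart, a compounding process: each reflection inside a contracting island hands the particle back a fraction of its energy. The cartoon below, with an arbitrary 5% gain per bounce, only illustrates how many small kicks compound into a roughly thousandfold energy gain; it is not the team's simulation:

```python
# Cartoon of Fermi-type acceleration in merging magnetic islands: a trapped
# particle reflects off the ends of a contracting island and gains a fixed
# fraction of its energy on each bounce, so its energy grows geometrically.
# The 5% gain per bounce and the bounce count are illustrative choices.

def accelerate(e0, gain_per_bounce, bounces):
    energies = [e0]
    for _ in range(bounces):
        energies.append(energies[-1] * (1 + gain_per_bounce))
    return energies

history = accelerate(e0=1.0, gain_per_bounce=0.05, bounces=142)
# Roughly 140 small 5% kicks compound into about a thousandfold energy gain.
```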

“The mechanisms we saw in this study seem consistent with what we have been working on for nearly 20 years, but what surprised me is that these particles gain so much energy,” Drake said. “One important thing about this set of observations is that it demonstrates that magnetic energy can get focused into a small number of extremely energetic particles.”

In addition to demystifying energy exchanges near the sun, learning more about magnetic reconnection—and any resulting solar flares—can help astronauts stay safe.

“These energetic particles are a threat to astronauts if they're out in space,” Drake said. “In a solar flare, you can get some dangerous particles that reach extremely high energies.”

As researchers continue to explore these problems through Parker Solar Probe data, Drake hopes that future observations will chart the spectra of electrons in magnetic reconnection events—a missing piece of the puzzle.

“Our simulations show that the electrons have a lot of energy, but the data we published in this paper don't show the electron spectrum at all,” Drake explained. “One of the important questions is, ‘What carries more energy: the protons or the electrons undergoing acceleration?’ That's one important aspect that we would really like to follow up on.”

###

This article was adapted from text provided by the Johns Hopkins Applied Physics Lab and the Southwest Research Institute. 

Their paper, “Magnetic Reconnection–driven Energization of Protons up to ∼400 keV at the Near-Sun Heliospheric Current Sheet,” was published May 29, 2025, in The Astrophysical Journal Letters.

This research was supported by NASA's Parker Solar Probe Mission (Contract No. NNN06AA01C), NASA grants (Nos. 80NSSC20K1815, 80NSSC18K1446, 80NSSC21K0112, 80NSSC20K1255, 80NSSC21K0971 and 80NSSC21K1765), the U.S. National Science Foundation (Grant No. PHY2109083) and Princeton University. This article does not necessarily reflect the views of these organizations.

 

Time Crystal Research Enters a New Phase

Our world only exists thanks to the diverse properties of the many materials that make it up. The differences between all those materials result from more than just which atoms and molecules form them. A material’s properties also depend on how those basic building blocks are organized in space. For instance, the only difference between a hard diamond and the flaky graphite in pencils is the pattern that carbon atoms form in the material.

Studying the repeating structures of materials has been instrumental to the field of materials science for the past century. But since 2012, researchers have branched out and started investigating a new type of material in which the basic building blocks order themselves into a stable structure that repeats in time.

The pink and green sheets of arrows represent a 2D material in two different states. As time progresses from left to right, the material oscillates between each state, forming a time crystal. The states flip back and forth at half the rate of the force driving them, which is represented by the yellow wave. (Credit: Stuart Yi-Thomas, UMD)

These special materials are called “time crystals.” (Whether or not a structure makes a pretty jewel, physicists call all solid, orderly structures of repeating atoms or molecules crystals.) Time crystals are collections of particles that undergo repetitive patterns in time, and they can only exist when there is an external force supplying energy to them. But not everything driven into a periodic pattern is a time crystal.

To be a time crystal, the repeating pattern must arise from the interactions of the constituent particles and not just mirror the periodic pattern of the driving force. Additionally, a time crystal’s structure provides stability so that it can maintain its pattern even when the driving force temporarily falters or the time crystal is nudged by another force. (So the pistons of a car engine making the car’s wheels rotate in unison as it drives down the road isn’t an example of a time crystal.)
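The defining subharmonic response can be caricatured in a few lines: a drive of period 1 flips a collective state every cycle, so the state repeats only every two cycles, at half the drive frequency. A genuine time crystal also needs interactions among its constituents to lock this pattern in against imperfections, which this cartoon leaves out:

```python
# Minimal cartoon of the subharmonic response that marks a time crystal:
# a period-1 drive flips a collective state once per cycle, so the state
# itself repeats only every 2 cycles -- half the drive frequency.

def drive(state, cycles):
    history = [state]
    for _ in range(cycles):
        state = -state                    # one drive pulse flips the state
        history.append(state)
    return history

h = drive(+1, 6)                          # -> [1, -1, 1, -1, 1, -1, 1]
# The drive repeats every cycle; the state repeats only every other cycle.
```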

Researchers have been able to create time crystals in the lab and describe their observed behaviors. However, descriptions of time crystals have generally focused on a particular experimental result and haven’t delved into the theory of what fundamentally makes them form. The lack of a robust theoretical framework leaves many open questions in the field and gives researchers little guidance on which ingredients are useful for creating new time crystals.

In an article published late last year in the journal Physical Review Letters, JQI graduate student Stuart Yi-Thomas and Professor and JQI co-Director Jay Sau presented a new framework for studying time crystals formed from specific ingredients. They made their framework by adapting a widely used and versatile theory for describing phases of matter in quantum systems.

To a physicist, phases of matter refer to more than just solids, liquids and gases and include other organizations of matter such as plasmas, magnets and superfluids. Phases represent distinctive states that a material can be in. When a condition like the temperature, pressure or magnetic field strength varies, a material can switch between phases—undergo a phase transition—and suddenly behave dramatically differently, like water freezing into ice or aluminum becoming superconducting when cooled in liquid helium.

“Time crystals constitute a new phase of matter, which has garnered a lot of excitement in the past decade or so,” says Yi-Thomas, who is also a graduate student in the Condensed Matter Theory Center (CMTC). “And heretofore, it has not really been understood as other phases of matter have.”

The framework that Sau and Yi-Thomas crafted allows researchers to study time crystals more like traditional phases and provides insights into when time crystals will form and when they will fall apart.

Some researchers hope that as time crystals become better understood their stability will be put to work as memory in quantum computers.

“The way we use materials and technology is mostly centered around the phases they are in,” says Sau, who is also a member of CMTC. “Solids, liquids, gases, superconductors, metals, insulators and magnets—these are all examples of phases of matter, and their properties are what we use to build technology. So the hope is that once we have other, newer phases we should be able to find technological applications for them.”

To study time crystals as a phase of matter, Sau and Yi-Thomas chose not to focus on an individual time crystal experiment. Instead, they stayed very general and picked out a few well-understood ingredients they thought were promising candidates for forming a time crystal.

The critical ingredient they needed to identify was a type of basic building block that could come together and produce the pattern. The pair focused their investigation on building blocks that all interact with each other and that each behave like what physicists call an “oscillator,” meaning something like a spring or pendulum that can bounce or swing between different states. But Sau and Yi-Thomas didn’t think just any oscillator would work; they wanted a particular type, called a nonlinear oscillator, that changes how strongly it responds when pushed far enough away from its resting state. Nonlinear oscillators come in many forms, from car shock springs to microscopic carbon nanotubes to electrical effects in circuits, and Sau and Yi-Thomas deliberately avoided committing to any one of them.

Another crucial ingredient to select was an energy input to drive the oscillators back and forth between states. The pair focused on an energy source that would create a phenomenon called “parametric amplification” in a collection of oscillators. Parametric amplification produces oscillating behaviors that reminded them of the repetition of a time crystal, but it comes with an amplification of the oscillations as energy is fed in over time. To get parametric amplification, the frequency of the driving must be selected based on the properties of the oscillators being used.

Then they needed an ingredient to balance the steady increase in energy. So they identified a third simple—but likely necessary—ingredient: friction or some other interaction that can bleed off energy and let the pieces settle into a stable pattern.

Together, these three requirements mean the constituents of a time crystal must be what physicists categorize as “weakly-nonlinear, parametrically-driven, dissipative coupled oscillators.”

“It sounds very specific because there's a lot of qualifiers, but for this specific model, it's kind of the simplest model we could do,” Yi-Thomas says. “And we expect that similar results would apply to a wide variety of systems.”
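For readers who want to see these ingredients in action, here is a minimal numerical sketch of a single weakly nonlinear, parametrically driven, damped oscillator. This is not code from the paper, and all parameters are illustrative assumptions. Modulating the oscillator's "spring constant" at twice its natural frequency pumps in energy (parametric drive), while the cubic nonlinearity and the damping let the motion saturate into a steady oscillation at half the drive frequency, the period-doubling pattern associated with time crystals.

```python
import numpy as np

# Toy model (illustrative parameters, not from the paper):
#   x'' + gamma*x' + w0^2*(1 + eps*cos(2*w0*t))*x + beta*x^3 = 0
# The drive modulates the "spring constant" at 2*w0. Above the damping
# threshold, the oscillator responds at w0, i.e. at HALF the drive
# frequency: the subharmonic, period-doubled pattern of a time crystal.
w0, gamma, eps, beta = 1.0, 0.1, 0.6, 1.0

def deriv(t, y):
    x, v = y
    a = -gamma*v - w0**2*(1 + eps*np.cos(2*w0*t))*x - beta*x**3
    return np.array([v, a])

# Fixed-step RK4 integration starting from a tiny displacement.
dt, steps = 0.01, 60000          # t runs from 0 to 600
y = np.array([1e-3, 0.0])
xs = np.empty(steps)
for i in range(steps):
    t = i*dt
    k1 = deriv(t, y)
    k2 = deriv(t + dt/2, y + dt/2*k1)
    k3 = deriv(t + dt/2, y + dt/2*k2)
    k4 = deriv(t + dt, y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    xs[i] = y[0]

# Look at the saturated late-time motion: dissipation plus nonlinearity
# have balanced the parametric pumping into a steady oscillation.
tail = xs[steps//2:]
amp = tail.max()
freqs = np.fft.rfftfreq(tail.size, d=dt) * 2*np.pi   # angular frequencies
peak = freqs[np.abs(np.fft.rfft(tail)).argmax()]
print(f"steady amplitude ~ {amp:.2f}, dominant frequency ~ {peak:.2f}")
print(f"drive frequency = {2*w0:.2f} (response locks to roughly half)")
```

A real time crystal involves many such oscillators coupled together; a single driven oscillator only illustrates the subharmonic locking that underlies the collective phase.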

Despite all the demands, several common experimental setups, including certain laser setups and specialized electrical circuits, can check all the identified boxes and thus provide a suitable place to try to make or model new time crystals.

Without worrying about which experimental setup might be used, Sau and Yi-Thomas needed a way to analyze whether there are conditions under which their ingredients produce a time crystal. Since they wanted to put time crystals on the same footing as more traditional phases, they turned to a classic tool for studying phases of matter in quantum systems. That tool, called Ginzburg-Landau theory, was created to describe superconductivity and the phase transition associated with a material becoming or ceasing to be a superconductor. Over time, physicists have used the same basic math to describe other phases, like magnets and Bose-Einstein condensates, as well.
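As a point of reference, the standard Ginzburg-Landau construction (shown here in its original superconductivity form, not the adapted version in the paper) expands the free energy F in powers of an order parameter ψ:

```latex
F(\psi) = F_0 + \alpha(T)\,|\psi|^2 + \frac{\beta}{2}\,|\psi|^4,
\qquad \alpha(T) \propto T - T_c .
```

Above the critical temperature T_c, the coefficient α is positive and the free energy is minimized by ψ = 0, meaning no order. Below T_c, α turns negative and the minimum moves to |ψ|² = −α/β, so an ordered state appears spontaneously. It is this symmetry-breaking mechanism that gets repurposed whenever the same basic math is applied to other phases.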

The pair adapted the existing theory’s descriptions of phase transitions to apply to the traits they had identified. The resulting framework suggested that experiments with the identified ingredients should be able to form a time crystal.

“To use an idiom, we show that you can build a time crystal with things you can find around the house,” Yi-Thomas says. “You don't need a complex cellular automaton. You don't need many-body localization, or these exotic things. Just with these ubiquitous elements, you can still create a symmetry-breaking phase—a time crystal.”

The paper didn’t argue that using these ingredients is the only way to produce a time crystal but instead highlighted them as promising (and convenient) candidates for researchers to consider.

In the paper, Sau and Yi-Thomas discussed what their new framework revealed about the conditions under which a time crystal should form or break down. They found that the oscillators need to respond with a certain amount of randomness, or noise. Just the right amount of noise helps a time crystal shed energy and settle into a stable pattern. If there isn’t enough randomness, the steady input of energy tends to introduce too much chaos and make the pieces shift around unpredictably. If there is too much, the randomness itself prevents a stable pattern from emerging. The pair’s calculations suggest a range of noise levels that supply an appropriate amount of randomness to form a time crystal.
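The robustness side of this can also be illustrated with a toy model. The sketch below is again an illustration with assumed parameters, not the paper's actual model: it adds a weak random force to a parametrically driven, damped, nonlinear oscillator, and the late-time motion still locks to half the drive frequency, showing the subharmonic pattern surviving a modest amount of noise.

```python
import numpy as np

# Toy parametrically driven, damped Duffing oscillator with a weak random
# force (illustrative parameters only):
#   x'' + gamma*x' + w0^2*(1 + eps*cos(2*w0*t))*x + beta*x^3 = noise
w0, gamma, eps, beta, sigma = 1.0, 0.1, 0.6, 1.0, 0.05
rng = np.random.default_rng(0)

# Semi-implicit Euler-Maruyama integration from a tiny displacement.
dt, steps = 0.01, 60000          # t runs from 0 to 600
x, v = 1e-3, 0.0
xs = np.empty(steps)
for i in range(steps):
    t = i*dt
    a = -gamma*v - w0**2*(1 + eps*np.cos(2*w0*t))*x - beta*x**3
    v += a*dt + sigma*np.sqrt(dt)*rng.standard_normal()
    x += v*dt
    xs[i] = x

# Despite the random kicks, the late-time motion still repeats at half
# the drive frequency: the subharmonic pattern is robust to weak noise.
tail = xs[steps//2:]
amp = np.abs(tail).max()
freqs = np.fft.rfftfreq(tail.size, d=dt) * 2*np.pi   # angular frequencies
peak = freqs[np.abs(np.fft.rfft(tail)).argmax()]
print(f"amplitude ~ {amp:.2f}, dominant frequency ~ {peak:.2f}")
print(f"drive frequency = {2*w0:.2f} (response still near half)")
```

Cranking `sigma` up by an order of magnitude or two in this sketch washes out the pattern, loosely mirroring the too-much-noise regime the framework identifies.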

The pair also tackled the question of whether the stability of a time crystal improves with the number of particles in it, similar to the way normal phases are more stable in larger systems. For instance, the larger a magnet is, the less likely it is to spontaneously demagnetize or reverse its polarity.

Time crystals might also have an intrinsic stability that increases with size, but many researchers suspect that, as experiments scale up, larger time crystals will still quickly fall out of their coordinated dances. So Sau and Yi-Thomas looked at larger and larger models of time crystals in their framework to predict whether the crucial stability is doomed to fail or might actually be robust. Their calculations predicted that as time crystals grow, they gain stability, similar to magnets and other phases.

Now the framework is available as a tool for researchers investigating a diverse pool of time crystal experiments. Since the theory lays out specific traits that it predicts can create a time crystal, it provides a guide for selecting experiments and conditions that might be fruitful for researchers searching for new time crystals. It also provides a way to predict and study both the conditions under which time crystals fail and when they might experience a shift to a slightly new pattern in time.

Sau and Yi-Thomas hope that other researchers will apply their framework to new time crystal experiments and that research on the time crystal phase of matter will eventually be as robust as research into other phases.

Original story by Bailey Bedford: https://jqi.umd.edu/news/time-crystal-research-enters-new-phase