Barkeshli Selected for Prestigious Simons Collaboration to Study Inner Workings of Artificial Intelligence

As artificial intelligence (AI) rapidly transforms everything from medicine to scientific research to creative fields, a fundamental question remains unanswered: How do AI systems actually work?  

AI models help diagnose diseases, discover new drugs, write computer code and generate images, yet scientists still don't fully understand the principles underlying their remarkable capabilities. Solving this ‘black box’ problem—where we can see AI's outputs but can’t fully comprehend its internal workings—has become more urgent as these systems become more deeply embedded in society.

University of Maryland Physics Professor Maissam Barkeshli will help unravel that mystery. 

Barkeshli was one of 17 principal investigators recently chosen for the Simons Collaboration on the Physics of Learning and Neural Computation, an international research initiative that aims to investigate the complex inner workings of AI. The collaboration, which will receive $2 million annually for the next four years, brings together leading experts from physics, mathematics, computer science and neuroscience. The team will first identify key emergent phenomena in AI, then isolate and study them systematically in smaller working groups focused on specific questions, and finally combine its findings at the conclusion of the collaboration.

“Maissam exemplifies the intellectual agility we prize in our faculty,” said UMD Physics Chair and Professor Steven Rolston. “Originally hired for his work in condensed matter theory, he is pivoting to address the exciting and potentially impactful challenge of understanding why artificial intelligence models actually work, informed by the concepts of mathematical and statistical physics.”

At UMD, Barkeshli's primary research focuses on quantum many-body phenomena. He studies how collections of many particles, such as electrons in materials, spontaneously organize into unusual states of matter like superconductors and quantum Hall systems. Such states are emergent phenomena, which occur when simple components interact to create behaviors that cannot be predicted from studying individual parts alone.

“Fundamentally, the field is really about emergence,” Barkeshli noted. “It’s about understanding collective behavior that is qualitatively different when you go to different scales that you wouldn’t have seen at smaller scales.” 

For Barkeshli, intelligence and learning are forms of emergent phenomena as well. Just as billions of electrons can collectively create superconductivity, neural networks with billions of parameters somehow learn to reason and understand language. As he begins his collaboration with experts from multiple disciplines, Barkeshli believes the theoretical tools and perspectives that physicists have developed in understanding the natural world can help us understand how AI works as well.

“There are three core ingredients of AI systems that interact to produce intelligence: training data, neural network architecture and optimization algorithms that are used to train models,” Barkeshli explained. “There’s incredibly rich interaction between these ingredients, but they all act very differently between themselves and have their own peculiarities at the individual level. We don’t have a very good idea of how it all comes together or why it works so well.”

These interactions lead to even more mysteries. For example, Barkeshli noted that AI follows predictable “scaling laws.”

“As you increase the size of data, the network and the computing power spent on training, AI systems get better and better,” he said. “In some cases, they follow very predefined, almost law-like patterns, where they’re getting better in a very predictable way. This is an emergent phenomenon that isn’t understood very well that we hope to study.”
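
To make that concrete, here is one widely cited empirical form from the machine learning literature (the "Chinchilla" scaling law of Hoffmann et al., 2022, offered as an illustration rather than a result of the new collaboration). It models the training loss L as a function of the number of model parameters N and the amount of training data D:

\[ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} \]

Here E is an irreducible loss floor, and A, B, α and β are constants fit to experiments. Why such simple power laws should emerge from systems as complicated as neural networks is exactly the kind of question the collaboration aims to answer.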

Current AI development relies heavily on trial and error, but Barkeshli’s work on emergent phenomena may be the key to answering fundamental questions—such as why the human brain can operate on about 20 watts of power, yet AI systems require much more energy to complete similar cognitive tasks. 

“People have been trying different ideas based on intuition, but a more systematic understanding of AI could unlock some useful capabilities, like bridging that efficiency gap between human brains and language models,” he explained. 

Although the Simons Collaboration will focus on the most fundamental aspects of AI systems, Barkeshli hopes that “peeking under the hood” will illuminate more profound applications for AI in everyday life. 

“There’s room for making immense improvements,” Barkeshli said. “With a deeper understanding of the fundamentals of AI, especially from a physicist’s point of view, we could come up with different kinds of curricula for data to train with, different kinds of architectures, different kinds of optimization algorithms—even entirely new paradigms that we haven’t thought of yet.”


Original story by Georgia Jiang: https://cmns.umd.edu/news-events/news/umd-physicist-selected-prestigious-simons-collaboration-study-inner-workings   


When Superfluids Collide, Physicists Find a Mix of Old and New

Physics is often about recognizing patterns, sometimes repeated across vastly different scales. For instance, moons orbit planets in the same way planets orbit stars, which in turn orbit the center of a galaxy.

When researchers first studied the structure of atoms, they were tempted to extend this pattern down to smaller scales and describe electrons as orbiting the nuclei of atoms. This is true to an extent, but the quirks of quantum physics mean that the pattern breaks in significant ways. An electron remains confined to a defined orbital region around the nucleus, but unlike a classical orbit, the electron will be found at a random location in that region rather than proceeding along a precisely predictable path.

Electron orbits bear a similarity to the orbits of moons and planets because all of these orbital systems feature attractive forces that pull the objects together. But a discrepancy arises for electrons because of their quantum nature. Similarly, superfluids—a quantum state of matter—have a dual nature, and to understand them, researchers have had to pin down when they follow the old rules of regular fluids and when they play by their own quantum rules. For instance, superfluids will fill the shape of a container like normal fluids, but their quantum nature lets them escape by climbing vertical walls. Most strikingly, they flow without any friction, which means they can spin endlessly once stirred up.

A new experiment forces two quantum superfluids together and creates mushroom cloud shapes similar to those seen above explosions. The blue and yellow areas represent two different superfluids, which each react differently to magnetic fields. After separating the two superfluids (as shown on the left), researchers pushed them together, forcing them to mix and creating the recognizable pattern that eventually broke apart into a chaotic mess. (Credit: Yanda Geng/JQI)

JQI Fellows Ian Spielman and Gretchen Campbell and their colleagues have been investigating the rich variety of quantum behaviors present in superfluids and exploring ways to utilize them. In a set of recent experiments, they mixed together two superfluids and stumbled upon some unexpected patterns familiar from normal fluids. In an article published in August 2025 in the journal Science Advances, the team described the patterns they saw in their experiments, which mirrored the ripples and mushroom clouds that commonly occur when two ordinary fluids with different densities meet.

The team studies a type of superfluid called a Bose-Einstein condensate (BEC). BECs form when many particles are cooled so close to absolute zero that they all collect into a single quantum state. That consolidation lets all the atoms coordinate and allows the quirks of quantum physics to play out at a much larger scale than is common in nature. The particular BEC the team used could easily be separated into two superfluids, providing a convenient way to prepare nearly smooth interfaces, which were useful for watching mixing patterns balloon from the tiniest seeds of imperfection into a turbulent mess. And the researchers didn't only find classical fluid behaviors in the quantum world; they also spied the quantum fingerprints hidden beneath the surface. Using the uniquely quantum features of their experiment, they developed a new technique for observing currents along the interface of two superfluids.

“It was really exciting to see how the behavior of normal liquids played out for superfluids, and to invent a new measurement technique leveraging their uniquely quantum behavior,” Spielman says.

To make the two superfluid BECs in the new experiment, the researchers used sodium atoms. Each sodium atom has a spin, a quantum property that makes it act like a little magnet that can point either with or against a magnetic field. Hitting the cooled-down cloud of sodium atoms with microwaves produces roughly equal numbers of atoms with spins pointing in opposite directions, which forms two BECs with distinct behaviors. In an uneven magnetic field, the cloud of the two intermingled BECs formed by the microwave pulse will sort itself into two adjacent clouds, with one effectively floating on top of the other; adjusting the field can make the superfluids move around.

This process was old hat in the lab, but, together with a little happenstance, it inspired the new experiment. JQI graduate student Yanda Geng, who is the lead author of the paper, was initially working on another project that required him to smooth out variations of the magnetic field in his setup. To test for magnetic fluctuations, Geng would routinely turn his cloud of atoms into the two BECs and take a snapshot of their distribution. The resulting images caught the eye of JQI postdoctoral researcher Mingshu Zhao, who at the time was working on his own project about turbulence in superfluids. Zhao, who is also an author of the paper, thought that the swirling patterns in the superfluids were reminiscent of turbulence in normal fluids. The snapshots from the calibration didn’t clearly show mushroom clouds, but something about the way the two BECs mixed seemed familiar.

“This is what you call serendipity,” Geng says. “And if you have somebody in the lab who knows what could have happened, they immediately could say, ‘Oh, that's something interesting and probably worth pursuing scientifically.’”

The hints kept appearing as Geng’s original experiment repeatedly hit roadblocks. After months of working on the project, he felt like he was banging his head against a wall. One weekend, another colleague, JQI postdoctoral researcher Junheng Tao, encouraged Geng to mix things up and spend some time exploring the hints of turbulence. Tao, who is also an author of the paper, suggested they intentionally create the two fluids in a stable state and check if they could see patterns forming before the turbulence erupted.

“It was a Sunday, we went into the lab, and we just casually put in some numbers and programmed the experiment, and bam, you see the signal,” Geng says.

The magnetic responses of the two BECs gave Geng and Tao a convenient way to control the superfluids. First, they let magnetism pull the two BECs into a stable configuration in which they lie flush against each other, like oil floating on water. Then, by reversing the way the magnetic field varied across the experiment, the BECs were suddenly pulled in the opposite direction, instantly producing the equivalent of water balanced on top of oil.

After adjusting the field, Geng and Tao were able to take just a single snapshot of the mixing BECs. To get the image, they relied on the fact that the BECs naturally absorb different colors of light. They flashed a color that interacted with just one of the BECs, so they could identify each BEC based on where the light was absorbed. Inconveniently, absorbing the light knocked many atoms out of the BECs, so snapping the image ended the run of the experiment.

By waiting different amounts of time each run, they were able to piece together what was happening as the two BECs mixed. The results revealed the distinctive formation of mushroom clouds that ultimately degenerated into messy turbulence. The researchers determined that despite the many stark differences between BEC superfluids and classical fluids, the BECs recreated a widespread effect, called the Rayleigh-Taylor instability, that is found in normal fluids.

The Rayleigh-Taylor instability describes what happens when two distinct fluids need to exchange places, such as when a dense gas or liquid sits on top of a lighter one with gravity pulling it down. The instability amplifies small imperfections in an almost stable state until they devolve into unpredictable turbulent mixing. It occurs for water on top of oil, cool dense air over hotter air (as happens after a big explosion) and when layers of material explode out from a star during a supernova. The instability contributes to the iconic “mushroom clouds” observed in the air layers moving above explosions, and similar shapes were found in the BEC.
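
For readers who want the quantitative flavor, the simplest textbook treatment of the instability (an idealized calculation for inviscid fluids without surface tension, not an analysis from the new paper) says that a small ripple on the interface with wavenumber k grows exponentially in time:

\[ \eta(t) = \eta_0 \, e^{\sigma t}, \qquad \sigma = \sqrt{A g k}, \qquad A = \frac{\rho_{\mathrm{heavy}} - \rho_{\mathrm{light}}}{\rho_{\mathrm{heavy}} + \rho_{\mathrm{light}}} \]

Here g is the strength of gravity (or whatever force stands in for it) and A is the Atwood number, which measures the density contrast. The exponential growth explains why tiny imperfections balloon so quickly into mushroom clouds and turbulence.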

“At first it's really mind-boggling,” Geng says. “How can it happen here? They’re just completely different things.”

With a little more work, they confirmed they could reliably recreate the behavior and showed that the superfluids in the experiment had all the necessary ingredients to produce the instability. In the experiment, the researchers had effectively substituted magnetism into the role gravity often plays in the creation of the Rayleigh-Taylor instability. This made it convenient to flip the direction of the force at a whim, which made it easy to begin with a calm interface between the fluids and observe the instability balloon from the tiniest seeds of imperfection into turbulent mixing.

The initial result prompted the group to follow up on the project with another experiment exploring a more stable effect at the interface. Instead of completely flipping the force, they kept the “lighter” BEC on top—like oil, or even air, resting on water. By continuously varying the magnetic field at a particular rate, they could shake the interface and create the equivalent of ripples on the surface of a pond. Since the atoms in each BEC all share a quantum state, the ripples have quantum properties and can behave like particles (called ripplons).
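
In the same idealized picture (again a textbook sketch rather than the paper's analysis), flipping the situation so the lighter fluid rests stably on top turns exponential growth into oscillation: the interface supports waves whose frequency satisfies

\[ \omega^2 = A g k \]

for deep layers when surface tension can be neglected, with the magnetic force playing the role of g in the experiment. The ripplons are what you get when waves like these are quantized.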

But despite the clear patterns resembling mushroom clouds and ripples of normal fluids, the quantum nature of the BECs was still present throughout the experiment. After seeing the familiar behaviors, Geng began to think about the quantum side of the superfluids and turned his attention to something that is normally challenging to do with BECs—measuring the velocity of currents flowing through them.

Geng and his colleagues used the fact that the velocity of a BEC is tied to its phase—a wavelike feature of every quantum state. The phase of a single quantum object is normally invisible, but when multiple phases interact, they can influence what researchers see in experiments. Like waves, if two phases are both at a peak when they meet, they combine, but if a peak meets a trough, they instead cancel out; circumstances can also produce any of the intermediate forms of combining or partially cancelling. When different interactions occur at different positions, they create patterns that are often visible in experiments. Geng realized that the interfaces in his experiment, where the wave functions of the two BECs met, gave the team a unique chance to observe interfering BEC phases and determine the velocities of the currents flowing along the interface.
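
The link between flow and phase is a standard identity for superfluids (a textbook relation rather than something unique to this experiment): the superfluid velocity is proportional to how steeply the phase φ of the condensate wave function varies in space,

\[ \mathbf{v} = \frac{\hbar}{m} \nabla \phi \]

where m is the mass of an atom and ħ is the reduced Planck constant. Measure how the phase changes along the interface, and you have measured the current.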

When the two BECs came together in their experiments, their phases interfered, but the resulting interference pattern remained hidden. However, Geng knew how to translate the hidden interference pattern to something he could see. Hitting the BECs with a microwave pulse could push the sodium atoms into new states where the pattern could be experimentally observed. With that translation, Geng could use his normal snapshot technique to capture an image of the interference between the two phases.

The quantum patterns he saw provide an additional tool for understanding the mixing of superfluids and demonstrate how the familiar Rayleigh-Taylor instability pattern found in the experiment had quantum patterns hidden beneath the surface. The results revealed that despite BEC superfluids being immersed in the quantum world, researchers can still benefit from keeping an eye out for the old patterns familiar from research on ordinary fluids.

“I think it's a very amazing thing for physicists to see the same phenomenon manifest in different systems, even though they are drastically different in their nature,” Geng says.

Original story by Bailey Bedford: https://jqi.umd.edu/news/when-superfluids-collide-physicists-find-mix-old-and-new

In addition to Campbell, who is also the Associate Vice President for Quantum Research and Education at UMD; Spielman; Geng; and Zhao, co-authors of the paper include former JQI postdoctoral researcher Shouvik Mukherjee and NIST scientist and former JQI postdoctoral researcher Stephen Eckel.

With Passive Approach, New Chips Reliably Unlock Color Conversion

Over the past several decades, researchers have been making rapid progress in harnessing light to enable all sorts of scientific and industrial applications. From stupendously accurate clocks to the petabytes of information zipping through data centers, those applications have turned the demand for turnkey technologies that can reliably generate and manipulate light into a global market worth hundreds of billions of dollars.

One challenge that has stymied scientists is the creation of a compact source of light that fits onto a chip, which makes it much easier to integrate with existing hardware. In particular, researchers have long sought to design chips that can convert one color of laser light into a rainbow of additional colors—a necessary ingredient for building certain kinds of quantum computers and making precision measurements of frequency or time.

Now, researchers at JQI have designed and tested new chips that reliably convert one color of light into a trio of hues. Remarkably, the chips all work without any active inputs or painstaking optimization—a major improvement over previous methods. The team described their results in the journal Science on Nov. 6, 2025.

The new chips are examples of photonic devices, which can corral individual photons, the quantum particles of light. Photonic devices split up, route, amplify and interfere streams of photons, much like how electronic devices manipulate the flow of electrons.

“One of the major obstacles in using integrated photonics as an on-chip light source is the lack of versatility and reproducibility,” says JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and a professor of physics at the University of Maryland. “Our team has taken a significant step toward overcoming these limitations.”

The new photonic devices are more than mere prisms. A prism splits multicolored light into its component colors, or frequencies, whereas these chips add entirely new colors that aren’t present in the incoming light. Being able to generate new frequencies of light directly on a chip saves the space and energy that would normally be taken up by additional lasers. And perhaps more importantly, in many cases lasers that shine at the newly generated frequencies don’t even exist.

The ability to generate new frequencies of light on a chip requires special interactions that researchers have been learning to engineer for decades. Ordinarily, the interactions between light and a photonic device are linear, which means the light can be bent or absorbed but its frequency won’t change (as in a prism). By contrast, nonlinear interactions occur when light is concentrated so intensely that it alters the behavior of the device, which in turn alters the light. This feedback can generate a panoply of different frequencies, which can be collected from the output of the chip and used for measurement, synchronization or a variety of other tasks. 

Unfortunately, nonlinear interactions are usually very weak. One of the first observations of a nonlinear optical process was reported in 1961, and it was so weak that someone involved in the publication process mistook the key data for a smudge and removed it from the main figure in the paper. That smudge was the subtle signature of second harmonic generation, in which two photons at a lower frequency are converted into one photon with double the frequency. Related processes can triple the frequency of incoming light, quadruple it, and so forth.
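
The doubling can be seen in one line of textbook nonlinear optics. In a nonlinear material, the polarization P responds to the optical field E with higher-order terms,

\[ P = \varepsilon_0 \left( \chi^{(1)} E + \chi^{(2)} E^2 + \chi^{(3)} E^3 + \cdots \right) \]

and for a field E = E₀ cos(ωt), the χ⁽²⁾ term contains cos²(ωt) = ½(1 + cos 2ωt), a component oscillating at twice the input frequency that radiates the second harmonic. The χ⁽³⁾ term similarly produces light at triple the frequency.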

Since that first observation of second harmonic generation, scientists have discovered ways to boost the strength of nonlinear interactions in photonic devices. In the original demonstration, the state of the art was to simply shine a laser on a piece of quartz, taking advantage of the natural electrical properties of the crystal. These days researchers rely on meticulously engineered chips tailored with photonic resonators. The resonators guide the light in tight cycles, allowing it to circulate hundreds of thousands or millions of times before being released. Each single trip through a resonator adds a weak nonlinear interaction, but many trips combine into a much stronger effect. Yet there are still tradeoffs when trying to produce a particular set of new frequencies using a single resonator. 
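
A rough rule of thumb (an estimate, not a number from the paper) shows why all that circulation helps. On resonance, the intensity circulating inside a ring resonator exceeds the input intensity by a factor of order the finesse F, the ratio of the resonator's free spectral range to its linewidth:

\[ \frac{I_{\mathrm{circ}}}{I_{\mathrm{in}}} \sim \frac{F}{\pi}, \qquad F = \frac{\Delta\nu_{\mathrm{FSR}}}{\delta\nu} \]

Because second harmonic generation scales with the square of the intensity, even a modest finesse can boost the conversion enormously.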

“If you want to simultaneously have second harmonic generation, third harmonic generation, fourth harmonic—it gets harder and harder,” says Mahmoud Jalali Mehrabad, the lead author of the paper and a former postdoctoral researcher at JQI who is now a research scientist at MIT. “You usually compensate, or you sacrifice one of them to get good third harmonic generation but cannot get second harmonic generation, or vice versa.”

In an effort to avoid some of these tradeoffs, Hafezi and JQI Fellow Kartik Srinivasan, together with Electrical and Computer Engineering Professor Yanne Chembo at the University of Maryland (UMD), previously pioneered ways of boosting nonlinear effects by using a multitude of tiny resonators that all work in concert. They showed in earlier work how a chip with hundreds of microscopic rings arranged into an array of resonators can amplify nonlinear effects and guide light around its edge. Last year, they showed that a chip patterned with such a grid could transmute a pulsed laser into a nested frequency comb—light with many equally spaced frequencies that is used for all kinds of high-precision measurements. However, it took many iterations to design chips with the right shape to generate the precise frequency comb they were after, and only some of their chips actually worked.

The fact that only a fraction of the chips worked is indicative of the maddening hit-or-miss nature of working with nonlinear devices. Designing a photonic chip requires balancing several things in order to generate an effect like frequency doubling. First, to double the frequency of light, a nonlinear resonator must support both the original frequency and the doubled frequency. Just as a plucked guitar string will only hum with certain tones, an optical resonator only hosts photons with certain frequencies, determined by its size and shape. But once you design a resonator with those frequencies locked in, you must also ensure that they circulate around the resonator at the same speed. If not, they will fall out of sync with each other, and the efficiency of the conversion will suffer.

Together these requirements are known as the frequency-phase matching conditions. In order to produce a useful device, researchers must simultaneously arrange for both conditions to match. Unfortunately, tiny nanometer-sized differences from chip to chip—which even the best chip makers in the world can’t avoid—will shift the resonant frequencies a little bit or change the speed at which they circulate. Those small changes are enough to wash out the finely tuned parameters in a chip and render the design useless for mass production.
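
For frequency doubling, the two conditions can be written compactly in their textbook form (the resonator-array version in the new paper is more involved). The device must host resonant modes at both the pump frequency ω and the harmonic 2ω, and the propagation constants k must satisfy

\[ \Delta k = k(2\omega) - 2k(\omega) = 0 \]

which in a simple material amounts to demanding that the refractive index be the same at both frequencies, n(2ω) = n(ω). Natural dispersion almost always spoils that equality, which is why phase matching is so difficult to engineer.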

One of the authors compared the predicament to the likelihood of spotting a solar eclipse. “If you want to actually see the eclipse, that means if you look up in the sky the moon has to overlap with the sun,” says Lida Xu, a co-lead author and a graduate student in physics at JQI. Getting reliable nonlinear effects out of photonic chips requires a similar kind of chance encounter.

Small misalignments in the frequency-phase matching conditions can be overcome with active compensation that adjusts the material properties of a resonator. But that involves building in little embedded heaters—a solution that both complicates the design and requires a separate power supply.

Researchers at JQI have designed and tested new chips that reliably convert one color of light (represented by the orange pulse in the lower left corner of the image above) into many colors (represented by the red, green, blue and dark grey pulses leaving the chip in the lower right corner). The array of rings—each one a resonator that allows light to circulate hundreds of thousands or millions of times—ensures that the interaction between the incoming light and the chip can double, triple and quadruple its frequency. (Credit: Mahmoud Jalali Mehrabad/JQI)

In the new work, Xu, Mehrabad and their colleagues discovered that the array of resonators used in their previous work already increases the chances of satisfying the frequency-phase matching conditions in a passive way—that is, without the use of any active compensation or numerous rounds of design. Instead of trying to engineer the precise frequencies they wanted to create and iterating the design of the chip in hopes of getting one that worked, they stepped back and considered whether the array of resonators produced any stable nonlinear effects across all the chips. When they checked, they were pleasantly surprised to find that their chips would generate second, third and even fourth harmonics for incoming light with a frequency of about 190 THz—a standard frequency used in fiber-optic telecommunications.

As they dug into the details, they realized that the reason all their chips worked was related to the structure of their resonator array. Light circulated quickly around the small rings in the array, which set a fast timescale. But there was also a “super-ring” formed by all the smaller rings, and light circulated around it more slowly. Having these two timescales in the chip had an important effect on the frequency-phase matching conditions that they hadn’t appreciated before. Instead of having to rely on meticulous design and active compensation to arrange for a particular frequency-phase matching condition, the two timescales provide researchers with multiple shots at nurturing the necessary interactions. In other words, the two timescales essentially provide the frequency-phase matching for free.

The researchers tested six different chips manufactured on the same wafer by sending in laser light with the standard 190 THz frequency, imaging a chip from above and analyzing the frequencies leaving an output port. They found that each chip was indeed generating the second, third and fourth harmonics, which for their input laser happened to be red, green and blue light. They also tested three single-ring devices. Even with the inclusion of embedded heaters to provide active compensation, they only saw second harmonic generation from one device over a narrow range of heater temperature and input frequency. By contrast, the two-timescale resonator arrays had no active compensation and worked over a relatively broad range of input frequencies. The researchers even showed that as they dialed up the intensity of their input light, the chips started to produce more frequencies around each of the harmonics, reminiscent of the nested frequency comb created in an earlier result.
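
The colors follow from a quick back-of-the-envelope conversion (our arithmetic, not a table from the paper). Using λ = c/ν, the 190 THz input corresponds to a wavelength near 1,578 nm in the infrared telecom band, and its harmonics land at

\[ 2\nu = 380~\mathrm{THz}\ (\lambda \approx 789~\mathrm{nm}), \qquad 3\nu = 570~\mathrm{THz}\ (\lambda \approx 526~\mathrm{nm}), \qquad 4\nu = 760~\mathrm{THz}\ (\lambda \approx 394~\mathrm{nm}) \]

that is, at the red edge of the visible spectrum, in the green, and in the violet-blue.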

The authors say that their framework could have broad implications for areas in which integrated photonics is already being used, especially metrology, frequency conversion and nonlinear optical computing. And their approach does it all without the hassle of active tuning or the precise engineering otherwise needed to satisfy the frequency-phase matching conditions.

“We have simultaneously relaxed these alignment issues to a huge degree, and also in a passive way,” Mehrabad says. “We don't need heaters; we don't have heaters. They just work. It addresses a long-standing problem.”

Original story by Chris Cesare: "With Passive Approach, New Chips Reliably Unlock Color Conversion," Joint Quantum Institute

In addition to Mehrabad, Hafezi, Srinivasan (who is also a Fellow of the National Institute of Standards and Technology), Chembo and Xu, the paper had several other authors: Gregory Moille, an associate research scientist at JQI; Christopher Flower, a former graduate student at JQI who is now a researcher at the Naval Research Laboratory; Supratik Sarkar, a graduate student in physics at JQI; Apurva Padhye, a graduate student in physics at JQI; Shao-Chien Ou, a graduate student in physics at JQI; Daniel Suarez-Forero, a former JQI postdoctoral researcher who is now an assistant professor of physics at the University of Maryland, Baltimore County; and Mahdi Ghafariasl, a postdoctoral researcher at JQI.

This research was funded by the Air Force Office of Scientific Research, the Army Research Office, the National Science Foundation and the Office of Naval Research.

Chung Yun Chang (1929 - 2025)

Professor Emeritus Chung Yun Chang died on October 29, 2025, in San Diego, California. He was 95.

Prof. Chang was a native of rural Hunan, China. He received a bachelor's degree from National Taiwan University and a Ph.D. from Columbia University in 1965.

Prof. Chang joined the University of Maryland Physics Department in the mid-1960s and worked with George Snow and Bob Glasser on the analysis of bubble chamber data. In those days almost the entire 4th floor of the Toll Physics Building consisted of bubble chamber scanning and measuring tables. Those were the days of establishing the properties of elementary particles, work that eventually led to the current Standard Model of particle physics. In the late 1960s, Prof. Chang and his coworkers worked on a Kp bubble chamber exposure at Brookhaven National Lab to study decays whose results were inconsistent with the |ΔI| = ½ rule. The analysis of this exposure continued for a long time, and the film remained at Maryland until the 1990s, when it was finally mined for its silver content.

With the advent of Fermilab, Prof. Chang and the Maryland group worked on a number of bubble chamber experiments there. Fermilab experiment E2B was a hybrid spectrometer experiment with optical spark chambers measuring forward tracks produced by 100 GeV π− interactions in the Argonne 30-inch hydrogen bubble chamber. The spark chambers and the bubble chamber were triggered if two or more forward tracks were detected via dE/dx deposits in three independent scintillation counters, indicating the presence of a high-multiplicity event. The hybrid triggered system avoided taking photos of uninteresting events. Profs. Chang, Snow and Glasser were joined by Phil Steinberg on this experiment.

Around the same time, Prof. Chang also worked with Prof. Steinberg on a magnetized beam dump experiment at Fermilab, looking for neutral heavy leptons.

George Snow conceived of a search for charm in the Fermilab 15-foot bubble chamber filled with deuterium before the discovery of the J/ψ. Although the proposal was accepted, it was delayed for many years. Prof. Chang and his coworkers did find several charm candidate events when the bubble chamber finally took data, but by then charm was no longer just a conjecture. The Standard Model was on its way to being finalized.

After the discovery of the ϒ at Fermilab and the proposal of QCD as the underpinning of the strong interactions, the Standard Model was heading toward completion. A set of experiments at DESY in Hamburg, Germany, established the existence of the gluon, the particle that binds the quarks in strong interactions. Prof. Chang worked with Gus and Bice Zorn, Andris Skuja and Prof. Glasser on the PLUTO experiment at PETRA, an e+e− collider. In 1979, three experiments at PETRA observed three-jet events that were consistent with gluon production. Later, the four experiments that operated at PETRA were awarded a special EPS prize for the discovery and characterization of the gluon in strong interactions. PLUTO made many early contributions to our understanding of QCD and particle jet fragmentation and introduced the study of γγ production of hadrons.

After PLUTO at PETRA, Prof. Chang worked on the OPAL experiment at LEP (the Large Electron Positron collider) at CERN in Geneva, Switzerland. While waiting for OPAL to begin taking data, Prof. Chang worked with Prof. Steinberg to search for evidence of muonium-antimuonium oscillations. They did not find such evidence, but for a while they held the best limits on the phenomenon.

At LEP, Prof. Chang worked with Prof. Snow on the Z line shape. The Maryland group had a major role in the OPAL experiment, leading the construction of the hadron calorimeter among other contributions. The analysis of the data from OPAL and the three other experiments at the Z pole, and later at higher energies, led to the most precise measurements of the electroweak interactions, validating the Standard Model's predictions. Working with his students, Prof. Chang carried out studies of the Z line shape and its decay properties, along with searches for new particles beyond the Standard Model.

After his retirement in 1997, Prof. Chang continued to do research, and had a deep interest in neutrino mixing studies. He was a Fellow of the American Physical Society.

Further information is posted here: https://www.dignitymemorial.com/funeral-homes/california/san-diego/pacific-beach-la-jolla-chapel/9560