A Billion Tiny Pendulums Could Detect the Universe’s Missing Mass

Researchers at the National Institute of Standards and Technology (NIST), the University of Maryland's Joint Quantum Institute (JQI), and their colleagues have proposed a novel method for finding dark matter, the cosmos’s mystery material that has eluded detection for decades. Dark matter makes up about 27% of the universe; ordinary matter, such as the stuff that builds stars and planets, accounts for just 5% of the cosmos. (A mysterious entity called dark energy accounts for the other 68%.)

According to cosmologists, all the visible material in the universe is merely floating in a vast sea of dark matter—particles that are invisible but nonetheless have mass and exert a gravitational force. Dark matter’s gravity would provide the missing glue that keeps galaxies from falling apart and account for how matter clumped together to form the universe’s rich galactic tapestry. 

The proposed experiment, in which a billion millimeter-sized pendulums would act as dark matter sensors, would be the first to hunt for dark matter solely through its gravitational interaction with visible matter. The experiment would be one of the few to search for dark matter particles with a mass as great as that of a grain of salt, a scale rarely explored and never studied by sensors capable of recording tiny gravitational forces.

Previous experiments have sought dark matter by looking for nongravitational signs of interactions between the invisible particles and certain kinds of ordinary matter. That’s been the case for searches for a hypothetical type of dark matter called WIMPs (weakly interacting massive particles), which were a leading candidate for the unseen material for more than two decades. Physicists looked for evidence that when WIMPs occasionally collide with chemical substances in a detector, they emit light or kick out electric charge.

Dark matter, the hidden stuff of our universe, is notoriously difficult to detect. In search of direct evidence, NIST researchers have proposed using a 3D array of pendulums as force detectors, which could detect the gravitational influence of passing dark matter particles. When a dark matter particle is near a suspended pendulum, the pendulum should deflect slightly due to the attraction of both masses. However, this force is very small, and difficult to isolate from environmental noise that causes the pendulum to move. To better isolate the deflections from passing particles, NIST researchers propose using a pendulum array. Environmental noise affects each pendulum individually, causing them to move independently. However, particles passing through the array will produce correlated deflections of the pendulums. Because these movements are correlated, they can be isolated from the background noise, revealing how much force a particle delivers to each pendulum and the particle’s speed and direction, or velocity.

Researchers hunting for WIMPs in this way have either come up empty-handed or garnered inconclusive results; the particles are too light (theorized to range in mass between that of an electron and a proton) to detect through their gravitational tug. 

With the search for WIMPs seemingly on its last legs, researchers at NIST and their colleagues are now considering a more direct method to look for dark matter particles that have a heftier mass and therefore wield a gravitational force large enough to be detected.

“Our proposal relies purely on the gravitational coupling, the only coupling we know for sure that exists between dark matter and ordinary luminous matter,” said study co-author Daniel Carney, a theoretical physicist jointly affiliated with JQI, NIST, the Joint Center for Quantum Information and Computer Science (QuICS), and the Fermi National Accelerator Laboratory.

The researchers—who also include Adjunct Professor Jacob Taylor, Sohitri Ghosh of JQI and QuICS, and Gordan Krnjaic of the Fermi National Accelerator Laboratory—calculate that their method can search for dark matter particles with a minimum mass about half that of a grain of salt, or about a billion billion times the mass of a proton. The scientists reported their findings recently in Physical Review D.

Because the only unknown in the experiment is the mass of the dark matter particle, not how it couples to ordinary matter, “if someone builds the experiment we suggest, they either find dark matter or rule out all dark matter candidates over a wide range of possible masses,” said Carney. The experiment would be sensitive to particles ranging from about 1/5,000 of a milligram to a few milligrams. 

That mass scale is particularly interesting because it covers the so-called Planck mass, a quantity of mass determined solely by three fundamental constants of nature and equivalent to about 22 micrograms.
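For concreteness, the Planck mass follows directly from those three constants: the reduced Planck constant ħ, the speed of light c, and Newton’s gravitational constant G. A quick back-of-the-envelope check, using standard CODATA values:

```python
import math

# Standard CODATA values in SI units
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 299792458.0         # speed of light, m/s
G = 6.67430e-11         # Newtonian constant of gravitation, m^3 kg^-1 s^-2

# The Planck mass is the unique mass that can be built from these three constants
m_planck_kg = math.sqrt(hbar * c / G)
m_planck_micrograms = m_planck_kg * 1e9  # 1 kg = 10^9 micrograms

print(f"Planck mass: {m_planck_micrograms:.1f} micrograms")  # about 22 micrograms
```

That value lands squarely inside the window, from a fraction of a microgram to a few milligrams, that the proposed detector would probe.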

Carney, Taylor and their colleagues propose two schemes for their gravitational dark matter experiment. Both involve tiny, millimeter-size mechanical devices acting as exquisitely sensitive gravitational detectors. The sensors would be cooled to temperatures just above absolute zero to minimize heat-related electrical noise and shielded from cosmic rays and other sources of radioactivity. In one scenario, a myriad of highly sensitive pendulums would each deflect slightly in response to the tug of a passing dark matter particle.

Similar devices (with much larger dimensions) have already been employed in the recent Nobel-prize-winning detection of gravitational waves, ripples in the fabric of space-time predicted by Einstein’s theory of gravity. Carefully suspended mirrors, which act like pendulums, move less than the length of an atom in response to a passing gravitational wave. 

In another strategy, the researchers propose using spheres levitated by a magnetic field or beads levitated by laser light. In this scheme, the levitation is switched off as the experiment begins, so that the spheres or beads are in free fall. The gravity of a passing dark matter particle would ever so slightly disturb the path of the free-falling objects. 

“We are using the motion of objects as our signal,” said Taylor. “This is different from essentially every particle physics detector out there.” 

The researchers calculate that an array of about a billion tiny mechanical sensors distributed over a cubic meter is required to differentiate a true dark matter particle from an ordinary particle or spurious random electrical signals or “noise” triggering a false alarm in the sensors. Ordinary subatomic particles such as neutrons (interacting through a nongravitational force) would stop dead in a single detector. In contrast, scientists expect a dark matter particle, whizzing past the array like a miniature asteroid, would gravitationally jiggle every detector in its path, one after the other. 

Noise would cause individual detectors to move randomly and independently rather than sequentially, as a dark matter particle would. As a bonus, the coordinated motion of the billion detectors would reveal the direction the dark matter particle was headed as it zoomed through the array.
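The statistical idea, time-ordered kicks adding up coherently across many sensors while independent noise averages away, can be sketched in a toy simulation. This is a simplified stand-in, not the team’s actual analysis; the array size, noise level, and kick strength below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

n_sensors = 200      # toy stand-in for the proposed array of tiny sensors
n_samples = 500      # time samples recorded by each sensor
sigma = 1.0          # independent environmental noise per sensor
kick = 0.4           # tiny deflection a passing particle gives each sensor

# Environmental noise moves each sensor randomly and independently.
data = rng.normal(0.0, sigma, size=(n_sensors, n_samples))

# A particle crossing the array deflects the sensors one after another:
# here, sensor i gets its kick one time step after sensor i-1.
t_entry = 100
for i in range(n_sensors):
    data[i, t_entry + i] += kick

def track_statistic(t0):
    """Sum the deflections along a candidate time-ordered track starting at t0."""
    return sum(data[i, t0 + i] for i in range(n_sensors))

# Scan candidate entry times; the correlated track adds coherently
# (~ n_sensors * kick) while independent noise grows only as sqrt(n_sensors).
candidates = range(n_samples - n_sensors)
scores = np.array([track_statistic(t0) for t0 in candidates])
best = int(np.argmax(scores))
significance = scores[best] / (sigma * np.sqrt(n_sensors))

print(f"recovered entry time: {best}, significance: {significance:.1f} sigma")
```

Even though each individual kick is buried in noise (signal-to-noise of 0.4 per sensor here), summing along the correct track recovers the crossing time with high significance, which is the essence of why many small detectors beat one.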

To fabricate so many tiny sensors, the team suggests that researchers may want to borrow techniques that the smartphone and automotive industries already use to produce large numbers of mechanical detectors.

Thanks to the sensitivity of the individual detectors, researchers employing the technology needn’t confine themselves to the dark side. A smaller-scale version of the same experiment could detect the weak forces from distant seismic waves as well as those from the passage of ordinary subatomic particles, such as neutrinos and single, low-energy photons (particles of light).

The smaller-scale experiment could even hunt for dark matter particles—if they impart a large enough kick to the detectors through a nongravitational force, as some models predict, Carney said. 

“We are setting the ambitious target of building a gravitational dark matter detector, but the R&D needed to achieve that would open the door for many other detection and metrology measurements,” said Carney. 

Researchers at other institutions have already begun conducting preliminary experiments using the NIST team’s blueprint.

This story was originally published by NIST News. It has been adapted with minor changes here. JQI is a research partnership between UMD and NIST, with the support and participation of the Laboratory for Physical Sciences.

Reference Publication:
"Proposal for gravitational direct detection of dark matter," Daniel Carney, Sohitri Ghosh, Gordan Krnjaic, Jacob M. Taylor, Phys. Rev. D 102, 072003 (2020)

 

Mind and Space Bending Physics on a Convenient Chip

Thanks to Einstein, we know that our three-dimensional space is warped and curved. And in curved space, normal ideas of geometry and straight lines break down, creating a chance to explore an unfamiliar landscape governed by new rules. But studying how physics plays out in a curved space is challenging: Just like in real estate, location is everything.

“We know from general relativity that the universe itself is curved in various places,” says Assistant Professor Alicia Kollár, who is also a Fellow of the Joint Quantum Institute (JQI) and the Quantum Technology Center. “But any place where there's actually a laboratory is very weakly curved because if you were to go to one of these places where gravity is strong, it would just tear the lab apart.”

Spaces that have different geometric rules than those we usually take for granted are called non-Euclidean. If you could explore non-Euclidean environments, you would find perplexing landscapes. Space might contract so that straight, parallel lines draw together instead of rigidly maintaining a fixed spacing. Or it could expand so that they forever grow further apart. In such a world, four equal-length roads that are all connected by right turns at right angles might fail to form a square block that returns you to your initial intersection.

On the left is a representation of a grid of heptagons in a hyperbolic space. To fit the uniform hyperbolic grid into “flat” space, the size and shape of the heptagons are distorted. In the appropriate hyperbolic space, each heptagon would have an identical shape and size, instead of getting smaller and more distorted toward the edges. On the right is a circuit that simulates a similar hyperbolic grid by directing microwaves through a maze of zig-zagging superconducting resonators. (Credit: Springer Nature; Produced by Princeton, Houck Lab)

These environments overturn core assumptions of normal navigation and can be impossible to accurately visualize. Non-Euclidean geometries are so alien that they have been used in videogames and horror stories as unnatural landscapes that challenge or unsettle the audience.

But these unfamiliar geometries are much more than just distant, otherworldly abstractions. Physicists are interested in new physics that curved space can reveal, and non-Euclidean geometries might even help improve designs of certain technologies. One type of non-Euclidean geometry that is of interest is hyperbolic space—also called negatively-curved space. Even a two-dimensional, physical version of a hyperbolic space is impossible to make in our normal, “flat” environment. But scientists can still mimic hyperbolic environments to explore how certain physics plays out in negatively curved space.

In a recent paper in Physical Review A, a collaboration between the groups of Kollár and JQI Fellow Alexey Gorshkov, who is also a physicist at the National Institute of Standards and Technology, presented new mathematical tools to better understand simulations of hyperbolic spaces. The research builds on Kollár’s previous experiments to simulate orderly grids in hyperbolic space by using microwave light contained on chips. Their new toolbox includes what they call a “dictionary between discrete and continuous geometry” to help researchers translate experimental results into a more useful form. With these tools, researchers can better explore the topsy-turvy world of hyperbolic space.

The situation isn’t precisely like Alice falling down the rabbit hole, but these experiments are an opportunity to explore a new world where surprising discoveries might be hiding behind any corner and the very meaning of turning a corner must be reconsidered.

“There are really many applications of these experiments,” says JQI postdoctoral researcher Igor Boettcher, who is the first author of the new paper. “At this point, it's unforeseeable what all can be done, but I expect that it will have a lot of rich applications and a lot of cool physics.”

A Curved New World

In flat space, the shortest distance between two points is a straight line, and parallel lines will never intersect—no matter how long they are. In a curved space, these basics of geometry no longer hold true. The mathematical definitions of flat and curved are similar to the day-to-day meaning when applied to two dimensions. You can get a feel for the basics of curved spaces by imagining—or actually playing around with—pieces of paper or maps.

For instance, the surface of a globe (or any ball) is an example of a two-dimensional positively curved space. And if you try to make a flat map into a globe, you end up with excess paper wrinkling up as you curve it into a sphere. To have a smooth sphere you must lose the excess space, resulting in parallel lines eventually meeting, like the lines of longitude that start parallel at the equator meeting at the two poles. Due to this loss, you can think of a positively curved space as being a less-spacy space than flat space.

Hyperbolic space is the opposite of a positively curved space—a more-spacy space. A hyperbolic space curves away from itself at every point. Unfortunately, there isn’t a hyperbolic equivalent of a ball that you can force a two-dimensional sheet into; it literally won’t fit into the sort of space that we live in.

The best you can do is make a saddle (or a Pringle) shape where the surrounding sheet hyperbolically curves away from the center point. Making every point on a sheet similarly hyperbolic is impossible; there isn’t a way to keep curving and adding paper to create a second perfect saddle point without it bunching up and distorting the first hyperbolic saddle point.

The extra space of a hyperbolic geometry makes it particularly interesting since it means that there is more room for forming connections. The differences in the possible paths between points impact how particles interact and what sort of uniform grid—like the heptagon grid shown above—can be made. Taking advantage of the extra connections that are possible in a hyperbolic space can make it harder to completely cut sections of a grid off from each other, which might impact designs of networks like the internet.
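One way to make “more-spacy” concrete: in an idealized hyperbolic plane of constant curvature −1 (the radii below are chosen purely for illustration), the area of a disk grows exponentially with its radius, while a Euclidean disk grows only quadratically. That exponential extra room is exactly what leaves space for so many more connections at a given distance:

```python
import math

def euclidean_disk_area(r):
    """Area of a disk of radius r in the flat plane."""
    return math.pi * r**2

def hyperbolic_disk_area(r):
    """Area of a disk of radius r in a hyperbolic plane of curvature -1."""
    return 2 * math.pi * (math.cosh(r) - 1)

for r in (1, 3, 10):
    ratio = hyperbolic_disk_area(r) / euclidean_disk_area(r)
    print(f"r = {r:2d}: hyperbolic disk holds {ratio:.0f}x the Euclidean area")
```

At small radii the two geometries barely differ, but by radius 10 the hyperbolic disk holds hundreds of times more area, which is why uniform hyperbolic grids like the heptagon lattice can pack in so many extra sites and links.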

Navigating Labyrinthine Circuits

Since it is impossible to physically make a hyperbolic space on Earth, researchers must settle for creating lab experiments that reproduce some of the features of curved space. Kollár and colleagues previously showed that they can simulate a uniform, two-dimensional curved space. The simulations are performed using circuits (like the one shown above) that serve as a very organized maze for microwaves to travel through.

A feature of the circuits is that the microwaves are indifferent to the shapes of the resonators that contain them; they are influenced only by their total length. It also doesn’t matter at what angle the different paths connect. Kollár realized that these facts mean the physical space of the circuit can effectively be stretched or squeezed to create a non-Euclidean space—at least as far as the microwaves are concerned.

In their prior work, Kollár and colleagues were able to create mazes with various zig-zagging path shapes and to demonstrate that the circuits simulated hyperbolic space. Despite the convenience and orderliness of the circuits they used, the physics playing out in them still represents a strange new world that requires new mathematical tools to efficiently navigate.

Hyperbolic spaces offer different mathematical challenges to physicists than the Euclidean spaces in which they normally work. For instance, researchers can’t use the standard physicist trick of imagining a lattice getting smaller and smaller to figure out what happens for an infinitely small grid, which should act like a smooth, continuous space. This is because in a hyperbolic space the shape of the lattice changes with its size due to the curving of the space. The new paper establishes mathematical tools, such as a dictionary between discrete and continuous geometry, to circumvent these issues and make sense of the results of simulations.

With the new tools, researchers can get exact mathematical descriptions and predictions instead of just making qualitative observations. The dictionary allows them to study continuous hyperbolic spaces even though the simulation is only of a grid. With the dictionary, researchers can take a description of microwaves traveling between the distinct points of the grid and translate them into an equation describing smooth diffusion, or convert mathematical sums over all the sites on the grid to integrals, which is more convenient in certain situations.

“If you give me an experiment with a certain number of sites, this dictionary tells you how to translate it to a setting in continuous hyperbolic space,” Boettcher says. “With the dictionary, we can infer all the relevant parameters you need to know in the laboratory setup, especially for finite or small systems, which is always experimentally important.”

With the new tools to help understand simulation results, researchers are better equipped to answer questions and make discoveries with the simulations. Boettcher says he’s optimistic about the simulations being useful for investigating the AdS/CFT correspondence, a physics conjecture for combining theories of quantum gravity and quantum field theories using a non-Euclidean description of the universe. And Kollár plans to explore if these experiments can reveal even more physics by incorporating interactions into the simulations.

“The hardware opened up a new door,” Kollár says. “And now we want to see what physics this will let us go to.”

Research Contact: Igor Boettcher

Media Contact: Bailey Bedford
 

Quantum Matchmaking: New NIST System Detects Ultra-Faint Communications Signals Using the Principles of Quantum Physics

Researchers at the National Institute of Standards and Technology (NIST), the Department of Physics at the University of Maryland (UMD) and JQI have devised and demonstrated a system that could dramatically increase the performance of communications networks while enabling record-low error rates in detecting even the faintest of signals. The work could potentially decrease the total amount of energy required for state-of-the-art networks by a factor of 10 to 100. 

The proof-of-principle system consists of a novel receiver and corresponding signal-processing technique that, unlike the methods used in today’s networks, are entirely based on the properties of quantum physics and thereby capable of handling even extremely weak signals with pulses that carry many bits of data.

The incoming signal (red, lower left) proceeds through a beam splitter to the photon detector, which has an attached time register (top right). The receiver sends the reference beam to the beam splitter to cancel the incoming pulse so that no light is detected. If even one photon is detected, it means that the receiver used an incorrect reference beam, which needs to be adjusted. The receiver uses exact times of photon detection to arrive at the right adjustment with fewer guesses. The combination of recorded detection times and the history of reference beam frequencies are used to find the frequency of the incoming signal. (Credit: NIST)

“We built the communication test bed using off-the-shelf components to demonstrate that quantum-measurement-enabled communication can potentially be scaled up for widespread commercial use,” said Ivan Burenkov, a research scientist at JQI. Burenkov and his colleagues report the results in PRX Quantum. “Our effort shows that quantum measurements offer valuable, heretofore unforeseen advantages for telecommunications, leading to revolutionary improvements in channel bandwidth and energy efficiency.”

Modern communications systems work by converting information into a laser-generated stream of digital light pulses in which information is encoded—in the form of changes to the properties of the light waves—for transfer and then decoded when it reaches the receiver. The train of pulses grows fainter as it travels along transmission channels, and conventional electronic technology for receiving and decoding data has reached the limit of its ability to precisely detect the information in such attenuated signals.

The signal pulse can dwindle until it is as weak as a few photons—or even less than one on average. At that point, inevitable random quantum fluctuations called “shot noise” make accurate reception impossible by normal (“classical,” as opposed to quantum) technology because the uncertainty caused by the noise makes up such a large part of the diminished signal. As a result, existing systems must amplify the signals repeatedly along the transmission line, at considerable energy cost, keeping them strong enough to detect reliably. 
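A one-line calculation shows why shot noise is so punishing at this level. For laser light, the photon count in a pulse is Poisson-distributed, so even a perfect detector sometimes sees nothing at all (the mean photon numbers below are illustrative):

```python
import math

# Mean photons per received pulse (illustrative values for a faint signal)
for n_mean in (0.5, 1.0, 5.0):
    # Poisson statistics ("shot noise"): even an ideal classical detector
    # registers zero photons with probability exp(-n_mean), so a pulse can
    # simply vanish -- an irreducible error floor for direct detection.
    p_empty = math.exp(-n_mean)
    print(f"mean photons = {n_mean}: pulse arrives empty {p_empty:.1%} of the time")
```

At half a photon per pulse on average, more than 60% of pulses deliver no photons at all, which is why classical receivers must keep amplifying the signal to stay above this floor.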

The NIST team’s system can eliminate the need for amplifiers because it can reliably process even extremely feeble signal pulses: “The total energy required to transmit one bit becomes a fundamental factor hindering the development of networks,” said Sergey Polyakov, senior scientist on the NIST team and an adjunct associate professor of physics at UMD. “The goal is to reduce the sum of energy required by lasers, amplifiers, detectors, and support equipment to reliably transmit information over longer distances. In our work here we demonstrated that with the help of quantum measurement even faint laser pulses can be used to communicate multiple bits of information—a necessary step towards this goal.”

To increase the rate at which information can be transmitted, network researchers are finding ways to encode more information per pulse by using additional properties of the light wave. So a single laser light pulse, depending on how it was originally prepared for transmission, can carry multiple bits of data. To improve detection accuracy, quantum-enhanced receivers can be fitted onto classical network systems. To date, those hybrid combinations can process up to two bits per pulse. The NIST quantum system uses up to 16 distinct laser pulses to encode as many as four bits.

To demonstrate that capability, the NIST researchers created an input of faint laser pulses comparable to a substantially attenuated conventional network signal, with the average number of photons per pulse from 0.5 to 20 (though photons are whole particles, a number less than one simply means that some pulses contain no photons). 

After preparing this input signal, the NIST researchers take advantage of its wavelike properties, such as interference, until it finally hits the detector as photons (particles). In the realm of quantum physics, light can act as either particles (photons) or waves, with properties such as frequency and phase (the relative positions of the wave peaks). 

Inside the receiver, the input signal’s pulse train combines (interferes) with a separate, adjustable reference laser beam, which controls the frequency and phase of the combined light stream. It is extremely difficult to read the different encoded states in such a faint signal. So the NIST system is designed to measure the properties of the whole signal pulse by trying to match the properties of the reference laser to it exactly. The researchers achieve this through a series of successive measurements of the signal, each of which increases the probability of an accurate match.

That is done by adjusting the frequency and phase of the reference pulse so that it interferes destructively with the signal when they are combined at the beam splitter, canceling the signal out completely so no photons can be detected. In this scheme, shot noise is not a factor: Total cancellation has no uncertainty.

Thus, counterintuitively, a perfectly accurate measurement results in no photon reaching the detector. If the reference pulse has the wrong frequency, a photon can reach the detector. The receiver uses the time of that photon detection to predict the most probable signal frequency and adjusts the frequency of the reference pulse accordingly. If that prediction is still incorrect, the detection time of the next photon results in a more accurate prediction based on both photon detection times, and so on. 

“Once the signal interacts with the reference beam, the probability of detecting a photon varies in time,” Burenkov said, “and consequently the photon detection times contain information about the input state. We use that information to maximize the chance to guess correctly after the very first photon detection.

“Our communication protocol is designed to give different temporal profiles for different combinations of the signal and reference light. Then the detection time can be used to distinguish between the input states with some certainty. The certainty can be quite low at the beginning, but it is improved throughout the measurement. We want to switch the reference pulse to the right state after the very first photon detection because the signal contains just a few photons, and the longer we measure the signal with the correct reference, the better our confidence in the result is.”
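The “null, listen, and switch on a click” logic can be caricatured in a few lines. This is a deliberately simplified Monte Carlo, not the NIST team’s time-resolved protocol: it assumes perfect cancellation when the reference matches, Poisson-distributed clicks when it doesn’t, and a crude cycle-to-the-next-guess update instead of the smarter time-based prediction described in the article:

```python
import random

random.seed(1)

M = 4         # alphabet size: 4 candidate states, i.e. 2 bits per pulse
N_MEAN = 3.0  # mean photons reaching the detector when the reference is wrong
TRIALS = 20000

def receive(true_state):
    """Adaptive nulling receiver for one pulse of unit duration."""
    t, guess = 0.0, 0
    while True:
        if guess == true_state:
            # Correct reference: destructive interference cancels the signal,
            # so no further photons arrive and the guess stands.
            return guess
        # Wrong reference: light leaks through; the next photon click
        # arrives after an exponentially distributed waiting time.
        t += random.expovariate(N_MEAN)
        if t > 1.0:
            return guess  # pulse ended before a click could correct us
        guess = (guess + 1) % M  # click! rule this state out, try the next

errors = 0
for _ in range(TRIALS):
    state = random.randrange(M)
    if receive(state) != state:
        errors += 1
print(f"error rate: {errors / TRIALS:.3f}")
```

Each click eliminates one wrong hypothesis, so the receiver homes in on the true state using only a handful of photons. The actual receiver does better still by using the recorded click times to jump to the most probable state instead of cycling blindly.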

Polyakov discussed the possible applications. “The future exponential growth of the internet will require a paradigm shift in the technology behind communications,” he said. “Quantum measurement could become this new technology. We demonstrated record low error rates with a new quantum receiver paired with the optimal encoding protocol. Our approach could significantly reduce energy for telecommunications.”

This story was originally published by NIST News. It has been adapted with minor changes here. JQI is a research partnership between UMD and NIST, with the support and participation of the Laboratory for Physical Sciences.

Reference publication: "Time-Resolving Quantum Measurement Enables Energy-Efficient, Large-Alphabet Communication," I.A. Burenkov, M.V. Jabir, A. Battou, S.V. Polyakov, PRX Quantum, 1, 010308 (2020)

QMC Team Discovers New Topological Phase of Matter

A collaboration between the Quantum Materials Center (QMC) and the NIST Center for Neutron Research, led by QMC graduate student I-Lin Liu, has just published results reporting the discovery of a new topological phase in the layered transition metal chalcogenide MoTe2, a promising host of electronic Weyl nodes and topological superconductivity.

a Six layers of Td–T' periodic superstructure, consisting of three layers of Td and T' phases with L–L interface. b Three layers of Td and T' slabs, separated (top) and joined (bottom). c Fermi surface obtained from separated (top) and joined slabs (bottom). d—top: The difference in the Fermi surfaces of the separated (c—top) and joined slabs (c—bottom), directly indicating the states due to the Td–T' interface. Similarly, (d—bottom) shows the interface Fermi pockets from the periodic superstructure shown in (a). The middle panel in (d) shows the quantum oscillations from the Td–T' joint slab calculations (b—bottom) compared with the experimental frequencies, which are represented as Gaussian curves with equal but arbitrary intensities.

MoTe2 harbors both noncentrosymmetric Td and centrosymmetric T’ structural phases, both of which have been identified as topologically nontrivial. However, Liu and colleagues demonstrated via quantum oscillations and neutron scattering measurements, and first-principles calculations, how applied pressure drives MoTe2 between the Td and T’ phases, through an intermediate mixed-phase region. The mixed-phase region gives rise to a network of topological interface states that yield quantum oscillations that survive despite the strong structural disorder, opening the possibility of stabilizing multiple topological phases coexisting with superconductivity.

This work is published in npj Quantum Materials.

Heaviest Black Hole Merger is Among Three Recent Gravitational Wave Discoveries

Scientists observed what appears to be a bulked-up black hole tangling with a more ordinary one. The research team, which includes physicists from the University of Maryland, detected two black holes merging, but one of the black holes was 1 1/2 times more massive than any ever observed in a black hole collision. The researchers believe the heavier black hole in the pair may be the result of a previous merger between two black holes.

Numerical simulation of two black holes that spiral inwards and merge, emitting gravitational waves. The simulated gravitational wave signal is consistent with the observation made by the LIGO and Virgo gravitational wave detectors on May 21st, 2019 (GW190521). Image Copyright © N. Fischer, H. Pfeiffer, A. Buonanno (Max Planck Institute for Gravitational Physics), Simulating eXtreme Spacetimes (SXS) Collaboration.

This type of hierarchical combining of black holes has been hypothesized in the past but the observed event, labeled GW190521, would be the first evidence for such activity. The Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration (LSC) and Virgo Collaboration announced the discovery in two papers published September 2, 2020, in the journals Physical Review Letters and Astrophysical Journal Letters.

The scientists identified the merging black holes by detecting the gravitational waves—ripples in the fabric of space-time—produced in the final moments of the merger. The gravitational waves from GW190521 were detected on May 21, 2019, by the twin LIGO detectors located in Livingston, Louisiana, and Hanford, Washington, and the Virgo detector located near Pisa, Italy.

“The mass of the larger black hole in the pair puts it into the range where it’s unexpected from regular astrophysics processes,” said Peter Shawhan, an LSC principal investigator and the LSC observational science coordinator. “It seems too massive to have been formed from a collapsed star, which is where black holes generally come from.”

The larger black hole in the merging pair has a mass 85 times greater than the sun. One possible scenario suggested by the new papers is that the larger object may have been the result of a previous black hole merger rather than a single collapsing star. According to current understanding, stars that could give birth to black holes with masses between 65 and 135 times greater than the sun don’t collapse when they die, so they are not expected to form black holes in that mass range.

“Right from the beginning, this signal, which is only a tenth of a second long, challenged us in identifying its origin,” said Alessandra Buonanno, a College Park professor at UMD and an LSC principal investigator who also has an appointment as Director at the Max Planck Institute for Gravitational Physics in Potsdam, Germany. “But, despite its short duration, we were able to match the signal to one expected of black-hole mergers, as predicted by Einstein’s theory of general relativity, and we realized we had witnessed, for the first time, the birth of an intermediate-mass black hole from a black-hole parent that most probably was born from an earlier binary merger.”

GW190521 is one of three recent gravitational wave discoveries that challenge current understanding of black holes and allow scientists to test Einstein’s theory of general relativity in new ways. The other two events included the first observed merger of two black holes with distinctly unequal masses and a merger between a black hole and a mystery object, which may be the smallest black hole or the largest neutron star ever observed. A research paper describing the latter was published in Astrophysical Journal Letters on June 23, 2020, while a paper about the former event will be published soon in Physical Review D.

“All three events are novel with masses or mass ratios that we’ve never seen before,” said Shawhan, who is also a fellow of the Joint Space-Science Institute, a partnership between UMD and NASA’s Goddard Space Flight Center. “So not only are we learning more about black holes in general, but because of these new properties, we are able to see effects of gravity around these compact bodies that we haven't seen before. It gives us an opportunity to test the theory of general relativity in new ways.”

For example, the theory of general relativity predicts that binary systems with distinctly unequal masses will produce gravitational waves with higher harmonics, and that is exactly what the scientists were able to observe for the first time.

“What we mean when we say higher harmonics is like the difference in sound between a musical duet with musicians playing the same instrument versus different instruments,” said Buonanno, who developed the waveform models to observe the harmonics with her LSC group. “The more substructure and complexity the binary has—for example, the masses or spins of the black holes are different—the richer is the spectrum of the radiation emitted.”

In addition to these three black hole mergers and a previously reported binary neutron star merger, the observational run from April 2019 through March 2020 identified 52 other potential gravitational wave events. The events were posted to a public alert system developed by LIGO and Virgo collaboration members in a program originally spearheaded by Shawhan so that other scientists and interested members of the public can evaluate the gravitational wave signals.

“Gravitational wave events are being detected regularly,” Shawhan said, “and some of them are turning out to have remarkable properties which are extending what we can learn about astrophysics.”

Watch a numerical simulation here: https://youtu.be/zRmwtL6lvIM

The research paper, “GW190521: A Binary Black Hole Coalescence with a Total Mass of 150 Solar Masses,” was published in Physical Review Letters on September 2, 2020.

The research paper, “Properties and Astrophysical Implications of the 150 Solar Mass Binary Black Hole Merger GW190521,” was published in Astrophysical Journal Letters on September 2, 2020.

The research paper, “GW190814: Gravitational Waves from the Coalescence of a 23 Solar Mass Black Hole with a 2.6 Solar Mass Compact Object,” was published in Astrophysical Journal Letters on June 23, 2020.

The research paper, “GW190412: Observation of a Binary-Black-Hole Coalescence with Asymmetric Masses,” has been accepted for publication in Physical Review D and was posted to arXiv on April 17, 2020.

About LIGO and Virgo

LIGO is funded by the NSF and operated by Caltech and MIT, which conceived of LIGO and lead the project. Financial support for the Advanced LIGO project was led by the NSF with Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council) and Australia (Australian Research Council-OzGrav) making significant commitments and contributions to the project. Approximately 1,300 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. A list of additional partners is available at https://my.ligo.org/census.php

The Virgo Collaboration is currently composed of approximately 550 members from 106 institutes in 12 different countries including Belgium, France, Germany, Hungary, Italy, the Netherlands, Poland, and Spain. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa in Italy, and is funded by Centre National de la Recherche Scientifique (CNRS) in France, the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, and Nikhef in the Netherlands. A list of the Virgo Collaboration groups can be found at http://public.virgo-gw.eu/the-virgo-collaboration/. More information is available on the Virgo website at http://www.virgo-gw.eu/.

Original story: https://cmns.umd.edu/news-events/features/4651#overlay-context=departments
Media Relations Contact:
Kimbra Cutlip, 301-405-9463