Researchers Narrow Down Mass of Sought-After Axion Particle
May 9, 2022
By Elizabeth Ball
Contact: cscomms@lbl.gov
How much matter is in a bit of dark matter – and how do we know, when we don’t even know what dark matter is? Researchers at the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Laboratory (Berkeley Lab), in collaboration with colleagues at other institutions, are one step closer to finding out. Using a new application of high-performance computing to the problem, they’ve narrowed down the range for the mass of the axion, a theoretical particle that may make up much of the dark matter in the universe.
The simulation, which used 5.2 million CPU hours and 69,632 physical CPU cores on the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC), is one of the largest ever performed in the pursuit of understanding dark matter. The results were published in Nature Communications in February 2022.
First theorized in 1977 and named after a laundry detergent (really!), the axion is one of several particles that, if it exists, may make up dark matter, the hypothetical form of matter believed to account for about 85% of the matter in the universe. Dark matter is considered “dark” because it doesn’t interact electromagnetically – among other things, it doesn’t absorb, refract, or reflect visible light. There are other possible answers to the question of what dark matter is made of, but the axion is of particular interest because it arises naturally from a proposed solution to a lingering puzzle in the Standard Model of particle physics, the strong CP problem.
In the 40-year hunt for dark matter and its makeup, one key variable is missing: the mass of the axion. But the equations that determine the mass are nonlinear and must be solved numerically – the equivalent of finding an extremely tiny invisible needle in a haystack made of…everything.
“What we're trying to do is to predict the axion mass, where the axion is one of the most sought-after dark matter candidates at present,” said Princeton University researcher Malte Buschmann, first author of the Nature paper. “The problem most experiments face is that, in order to have a measurable detection, you need to hit a very specific frequency, and that frequency depends on the axion mass, so if you don't know what the axion mass is, you basically have to test every single resonance frequency. That's incredibly cumbersome, and one of the main reasons that the axion has not been detected yet, if it exists.”
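To see why the unknown mass is such an obstacle: a resonant detector must be tuned to the frequency of the photon an axion would convert into, ν = mc²/h. The short sketch below is purely illustrative – it is not code from the study – and converts a few candidate masses spanning the ranges discussed in this article into the microwave frequencies an experiment would need to scan.

```cpp
#include <cstdio>

// Illustrative only: convert candidate axion masses (in micro-eV) to the
// corresponding resonance frequency nu = m*c^2/h. With the mass already
// expressed as an energy in eV, this is simply nu = m/h, with h in eV*s.
int main() {
    const double h_eV_s = 4.135667696e-15;              // Planck constant, eV*s
    const double masses_ueV[] = {25.0, 40.0, 180.0, 500.0};
    for (double m : masses_ueV) {
        double nu_GHz = (m * 1.0e-6) / h_eV_s / 1.0e9;  // eV -> Hz -> GHz
        std::printf("m = %6.1f ueV  ->  nu ~ %5.1f GHz\n", m, nu_GHz);
    }
    return 0;
}
```

Each of those frequencies has to be scanned step by step, so any narrowing of the mass range translates directly into saved experiment time.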
Adapting for adaptive mesh refinement
Previous studies using static lattice simulations, which maintain consistent resolution throughout, have yielded only a broad range for the axion mass: anywhere between 25 and just over 500 microelectronvolts (µeV). To narrow it down, the team worked with CCSE researchers to use adaptive mesh refinement (AMR) to focus their computational efforts on axion strings – topological defects in the particle soup of the early universe. AMR renders key areas of a simulation at the highest resolution while allowing less important parts to remain lower-res. In this case, focusing on potentially axion-dense areas conserved computational resources and yielded an increase in sensitivity of more than three orders of magnitude.
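The idea behind AMR can be seen in a toy example. The sketch below is a deliberate simplification – not the collaboration’s code – that tags the cells of a one-dimensional grid where a field changes sharply, the analogue of a thin axion string cutting through an otherwise quiet region, and marks only those cells for refinement.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy illustration of AMR tagging: refine only where the field is steep.
int main() {
    const int n = 64;                 // coarse-grid cells
    std::vector<double> phi(n);

    // A smooth field with one sharp transition near x = 0.5, standing in
    // for an axion string embedded in a slowly varying background.
    for (int i = 0; i < n; ++i) {
        double x = (i + 0.5) / n;
        phi[i] = std::tanh((x - 0.5) * 80.0);
    }

    // Tagging pass: flag a cell if the field jumps sharply across it.
    int first = -1, last = -1;
    for (int i = 1; i + 1 < n; ++i) {
        double jump = 0.5 * std::fabs(phi[i + 1] - phi[i - 1]);
        if (jump > 0.1) {             // steep: this cell needs refinement
            if (first < 0) first = i;
            last = i;
        }
    }
    std::printf("refine cells %d..%d of %d; leave the rest coarse\n",
                first, last, n);
    return 0;
}
```

Block-structured AMR frameworks like AMReX apply this idea recursively: tagged cells are grouped into boxes, finer grids are overlaid on just those boxes, and the process repeats level by level.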
“Prior fixed-grid simulations could achieve only about 8,000³ sites, and they’re doing these big cosmological simulations, but they’re also trying to resolve these really thin strings, and 8,000³ grid points just don’t cut it,” said CCSE researcher Adam Peterson. AMR, though, “wants to simulate the big cosmological scales as well as the tiniest string scales, so it identifies ‘OK, this is a string, let’s resolve this,’ and I think they did 12 levels of refinement. Every level has a refinement ratio of two, and they achieved something like the equivalent of 65,000³ sites…which on a fixed scale is ridiculous. That’s impossible. You can’t do that.”
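The arithmetic behind that “equivalent” figure is simple: each refinement level doubles the resolution, so twelve levels multiply it by 2¹² = 4,096. Assuming, purely for illustration, a coarse grid of 16³ (a hypothetical base size, not taken from the paper), the finest level resolves features as if the whole domain were one enormous uniform grid:

```cpp
#include <cstdio>

int main() {
    const long base   = 16;  // hypothetical coarse cells per side (illustrative)
    const int  levels = 12;  // levels of refinement, per the quote
    // Each level refines by a factor of two, so the finest level behaves
    // like a uniform grid of base * 2^levels cells per side.
    long effective = base << levels;  // 16 * 4,096 = 65,536
    std::printf("effective resolution: %ld^3 sites\n", effective);
    return 0;
}
```

A uniform 65,536³ grid would contain roughly 2.8 × 10¹⁴ sites – hence “impossible” – whereas AMR spends that resolution only where the strings actually are.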
To apply AMR to their simulation, the team used AMReX, an open-source framework for performing block-structured AMR. Applying the technique to this problem was new territory, and it began with a bit of a learning curve.
“[AMReX] was still a new technology to apply to this problem, and so early on it did take a lot of iterating back and forth between the axion-mass search team and us to explain how the details work and how to tune AMReX to tightly focus resolution where they needed it, and then to follow those strings as they moved around, which is a challenging problem computationally, because you have to be able to refine your domain,” said CCSE researcher Don Willcox, who helped the team adopt AMReX. “That took a lot of interplay between us early on; we do our best with the documentation, but it’s still good to talk to [the team] about it and their needs.”
With time and teamwork, however, the collaboration was a success. The simulation pointed to a likely axion mass between 40 and 180 µeV – a vast improvement in precision over previous research and, incidentally, a range in which no axion experiments are currently searching.
Refining with GPUs
What comes next? First, the result will have to be replicated, and then ideally narrowed down further, with the help of Perlmutter, NERSC’s new flagship supercomputer, slated to be fully installed in 2022.
“We are preparing to run on Perlmutter; we've adapted all of our code to GPUs so that we can take advantage of its GPU capabilities,” said UC Berkeley researcher Ben Safdi, another author on the paper. “And we're planning to take full advantage of it when it comes fully online—to run not just one, but a suite of simulations with the goal being to narrow this precision down to the 10% level. I think with Perlmutter we should be able to make that happen.”
When that time comes, making the switch from Cori’s CPUs to Perlmutter’s GPUs won’t be a problem. AMReX coordinates behind the scenes with the underlying Message Passing Interface (MPI) library and abstracts away the details of GPU programming, allowing the team’s code to run seamlessly on both kinds of systems – and paving the way for an even better understanding of what dark matter is made of.
“Using MPI requires some specialized knowledge of that library, but users of AMReX never have to interact with that library directly,” said Willcox. “Our data structures and functions already know how to interface with this parallel library, so you don’t have to do that yourself. The same is true for GPUs; if you want your code to run on GPUs, like on Perlmutter, all you really have to do is write your code in the general pattern we provide, and it will run on either a CPU system or a GPU system with no modifications.”
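As a concrete illustration of the pattern Willcox describes, the sketch below uses AMReX’s MultiFab and ParallelFor idiom; the domain and box sizes are arbitrary choices for the example, not values from the study. The same source compiles to an ordinary CPU loop or to a GPU kernel depending on how AMReX itself was built, and the decomposition across MPI ranks happens without a single explicit MPI call in user code.

```cpp
#include <AMReX.H>
#include <AMReX_MultiFab.H>
#include <AMReX_Print.H>

int main(int argc, char* argv[]) {
    amrex::Initialize(argc, argv);
    {
        // A 64^3 domain chopped into 32^3 boxes; AMReX distributes the
        // boxes over MPI ranks behind the scenes.
        amrex::Box domain(amrex::IntVect(0), amrex::IntVect(63));
        amrex::BoxArray ba(domain);
        ba.maxSize(32);
        amrex::DistributionMapping dm(ba);
        amrex::MultiFab field(ba, dm, 1, 0);  // one component, no ghost cells

        // Loop over the boxes owned by this rank.
        for (amrex::MFIter mfi(field); mfi.isValid(); ++mfi) {
            const amrex::Box& bx = mfi.validbox();
            auto const& a = field.array(mfi);
            // ParallelFor runs the lambda over every cell in the box --
            // as a CPU loop or a GPU kernel, with no change to this code.
            amrex::ParallelFor(bx,
                [=] AMREX_GPU_DEVICE (int i, int j, int k) {
                    a(i, j, k) = static_cast<amrex::Real>(i + j + k);
                });
        }
        amrex::Print() << "max value: " << field.max(0) << "\n";
    }
    amrex::Finalize();
    return 0;
}
```

Because all of the parallelism lives inside AMReX’s data structures, moving the axion simulation from Cori’s CPUs to Perlmutter’s GPUs is largely a matter of rebuilding the framework, not rewriting the science code.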
For more information about this study and its scientific results, see this UC Berkeley news release.
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.