CosmoGAN: Training a Neural Network to Study Dark Matter
May 14, 2019
Contact: Kathy Kincade, kkincade@lbl.gov, +1 510 495 2124
As cosmologists and astrophysicists delve deeper into the darkest recesses of the universe, their need for increasingly powerful observational and computational tools has expanded exponentially. From facilities such as the Dark Energy Spectroscopic Instrument to supercomputers like the Cori system at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC), they are on a quest to collect, simulate, and analyze increasing amounts of data that can help explain the nature of things we can’t see, as well as those we can.
Toward this end, gravitational lensing is one of the most promising tools scientists have for extracting this information, giving them the ability to probe both the geometry of the universe and the growth of cosmic structure. Gravitational lensing distorts images of distant galaxies in a way that is determined by the amount of matter along the line of sight in a given direction, and it provides a way of looking at a two-dimensional map of dark matter, according to Deborah Bard, group lead for the Data Science Engagement Group at NERSC.
“Gravitational lensing is one of the best ways we have to study dark matter, which is important because it tells us a lot about the structure of the universe,” she said. “The majority of matter in the universe is dark matter, which we can’t see directly, so we have to use indirect methods to study how it is distributed.”
But as experimental and theoretical datasets grow, along with the simulations needed to image and analyze this data, a new challenge has emerged: these simulations are increasingly – even prohibitively – computationally expensive. So computational cosmologists often resort to computationally cheaper surrogate models, which emulate expensive simulations. More recently, however, “advances in deep generative models based on neural networks opened the possibility of constructing more robust and less hand-engineered surrogate models for many types of simulators, including those in cosmology,” said Mustafa Mustafa, a machine learning engineer at NERSC and lead author on a new study that describes one such approach developed by a collaboration involving Berkeley Lab, Google Research, and the University of KwaZulu-Natal.
A variety of deep generative models are being investigated for science applications, but the Berkeley Lab-led team is taking a unique tack: generative adversarial networks (GANs). In a paper published May 6, 2019, in Computational Astrophysics and Cosmology, they discuss their new deep learning network, dubbed CosmoGAN, and its ability to create high-fidelity, weak gravitational lensing convergence maps.
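For readers unfamiliar with the technique, the sketch below shows the basic GAN setup in PyTorch: a generator maps random noise vectors to single-channel images (stand-ins for convergence maps), and a discriminator learns to tell generated maps from simulated ones. The layer sizes, map resolution, and framework here are illustrative assumptions for a minimal sketch, not the published CosmoGAN architecture.

```python
# Minimal, illustrative GAN skeleton (assumed shapes; not the published CosmoGAN model).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent noise vector to a single-channel 64x64 'convergence map'."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),           # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),            # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),            # 32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                                 # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """Scores how 'simulation-like' a map looks (high logit = looks like a real simulation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),   # 32x32
            nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),  # 16x16
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),  # 8x8
            nn.Conv2d(64, 1, 8, 1, 0),                      # single logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

# A generated batch: 8 latent vectors -> 8 fake convergence maps.
g = Generator()
fake_maps = g(torch.randn(8, 64))
print(fake_maps.shape)  # torch.Size([8, 1, 64, 64])
```

During training the two networks are pitted against each other: the discriminator is rewarded for separating simulated maps from generated ones, and the generator is rewarded for fooling it.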
“A convergence map is effectively a 2D map of the gravitational lensing that we see in the sky along the line of sight,” said Bard, a co-author on the Computational Astrophysics and Cosmology paper. “If you have a peak in a convergence map that corresponds to a peak in a large amount of matter along the line of sight, that means there is a huge amount of dark matter in that direction.”
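For context, the convergence that Bard describes can be written down compactly. The expression below is the standard single-source-plane formula from the weak-lensing literature (with comoving distance χ, source distance χ_s, scale factor a, matter overdensity δ, Hubble constant H_0, and matter density Ω_m); it is given here for orientation and is not quoted from the paper itself.

```latex
\kappa(\boldsymbol{\theta}) =
\frac{3 H_0^{2}\,\Omega_m}{2 c^{2}}
\int_0^{\chi_s} \mathrm{d}\chi\,
\frac{\chi\,(\chi_s-\chi)}{\chi_s}\,
\frac{\delta(\chi\boldsymbol{\theta},\,\chi)}{a(\chi)}
```

Directions with more matter along the line of sight accumulate a larger κ, which is why a peak in the map points to a concentration of dark matter.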
The Advantages of GANs
Why opt for GANs instead of other types of generative models? Performance and precision, according to Mustafa.
“From a deep learning perspective, there are other ways to learn how to generate convergence maps from images, but when we started this project GANs seemed to produce very high-resolution images compared to competing methods, while still being efficient in terms of computation and network size,” he said.
“We were looking for two things: to be accurate and to be fast,” added co-author Zarija Lukić, a research scientist in the Computational Cosmology Center at Berkeley Lab. “GANs offer hope of being nearly as accurate as full physics simulations.”
The research team is particularly interested in constructing a surrogate model that would reduce the computational cost of running these simulations. In the Computational Astrophysics and Cosmology paper, they outline a number of advantages of GANs in the study of large physics simulations.
“GANs are known to be very unstable during training, especially when you reach the very end of the training and the images start to look nice – that’s when the updates to the network can be really chaotic,” Mustafa said. “But because we have the summary statistics that we use in cosmology, we were able to evaluate the GANs at every step of the training, which helped us determine the generator we thought was the best. This procedure is not usually used in training GANs.”
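As a concrete illustration of that procedure (the details here are assumed, not the validation code from the paper), the sketch below computes a simple summary statistic, the azimuthally averaged power spectrum of a square map, and uses it to score a batch of generated maps against simulated reference maps. A score like this can be logged at every checkpoint to pick the best generator.

```python
# Hypothetical checkpoint-scoring sketch: compare the mean power spectrum of
# generated convergence maps against reference (simulated) maps.
import numpy as np

def power_spectrum(kappa):
    """Azimuthally averaged power spectrum of a square 2D map."""
    n = kappa.shape[0]
    fk = np.fft.fftshift(np.fft.fft2(kappa))
    power2d = np.abs(fk) ** 2 / n**2
    # Radial wavenumber of each Fourier pixel, measured from the map centre.
    ky, kx = np.indices((n, n)) - n // 2
    k = np.hypot(kx, ky).astype(int)
    # Average power in integer-k bins up to the Nyquist scale (skip the DC bin).
    return np.array([power2d[k == i].mean() for i in range(1, n // 2)])

def checkpoint_score(generated_maps, simulated_maps):
    """Mean relative difference between the ensembles' average spectra (lower is better)."""
    gen = np.mean([power_spectrum(m) for m in generated_maps], axis=0)
    sim = np.mean([power_spectrum(m) for m in simulated_maps], axis=0)
    return np.mean(np.abs(gen - sim) / sim)

# Toy usage with random stand-ins for real maps.
rng = np.random.default_rng(0)
fake = rng.normal(size=(16, 64, 64))
real = rng.normal(size=(16, 64, 64))
print(f"checkpoint score: {checkpoint_score(fake, real):.3f}")
```

In practice a cosmology analysis would track several such statistics at once, but the principle is the same: a physics-motivated metric replaces visual inspection when deciding which generator to keep.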
Using the CosmoGAN generator network, the team has been able to produce convergence maps that are described, with high statistical confidence, by the same summary statistics as the fully simulated maps. Producing maps that are statistically indistinguishable from those generated by physics-based simulations is an important step toward building emulators out of deep neural networks.
“The huge advantage here was that the problem we were tackling was a physics problem that had associated metrics,” Bard said. “With our approach, there are actual metrics that allow you to quantify how accurate your GAN is. To me that is what is really exciting about this – how these kinds of physics problems can influence machine learning methods.”
Ultimately such approaches could transform science that currently relies on detailed physics simulations that require billions of compute hours and occupy petabytes of disk space - but there is considerable work still to be done. Cosmology data (and scientific data in general) can require very high-resolution measurements, such as full-sky telescope images.
“The 2D images considered for this project are valuable, but the actual physics simulations are 3D and can be time-varying and irregular, producing a rich, web-like structure of features,” said Wahid Bhimji, a big data architect in the Data and Analytics Services group at NERSC and a co-author on the Computational Astrophysics and Cosmology paper. “In addition, the approach needs to be extended to explore new virtual universes rather than ones that have already been simulated - ultimately building a controllable CosmoGAN.”
“The idea of doing controllable GANs is essentially the Holy Grail of the whole problem that we are working on: to be able to truly emulate the physical simulators we need to build surrogate models based on controllable GANs,” Mustafa added. “Right now we are trying to understand how to stabilize the training dynamics, given all the advances in the field that have happened in the last couple of years. Stabilizing the training is extremely important to actually be able to do what we want to do next.”
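One way to read “controllable” here is a conditional GAN, in which the generator receives cosmological parameters (for example Ω_m and σ_8) alongside the noise vector, so that new parameter values map to new virtual universes. The fragment below is a speculative sketch of that conditioning pattern, not an implementation from the CosmoGAN paper; the parameter choice and network are illustrative assumptions.

```python
# Speculative conditional-generator sketch: concatenate cosmological parameters
# (e.g. Omega_m, sigma_8) to the latent vector before generating a map.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=64, n_params=2):
        super().__init__()
        # Any image-generating backbone could go here; this tiny fully connected
        # network that outputs a flattened 64x64 map is purely for illustration.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_params, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),
        )

    def forward(self, z, params):
        # params holds the cosmology to emulate, e.g. [Omega_m, sigma_8].
        x = torch.cat([z, params], dim=1)
        return self.net(x).view(-1, 1, 64, 64)

g = ConditionalGenerator()
z = torch.randn(4, 64)
cosmo = torch.tensor([[0.3, 0.8]] * 4)  # same (Omega_m, sigma_8) for a batch of 4
maps = g(z, cosmo)
print(maps.shape)  # torch.Size([4, 1, 64, 64])
```

Trained this way, a single generator could in principle be asked for maps at parameter values it was never explicitly shown, which is what it would take to emulate, rather than merely reproduce, the physical simulators.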
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.