Previous NESAP Projects
NESAP for Simulations
- NWChemEx: computational chemistry
- ImSim: simulating images for the Large Synoptic Survey Telescope
- BerkeleyGW: materials science
- Quantum Espresso: materials science and quantum chemistry
NESAP for Data
- TOAST: cosmic microwave background
- ATLAS: Large Hadron Collider detector
- CMS: Large Hadron Collider detector
- Resilient Scientific Workflows: analysis pipelines at the Joint Genome Institute
NESAP call for the Perlmutter system
Simulations
- EXAALT, Danny Perez (Los Alamos National Laboratory)
The purpose of the Exascale Atomistic Capability for Accuracy, Length, and Time (EXAALT) project is to develop an exascale-scalable molecular dynamics simulation platform that lets users choose the point in accuracy-length-time space most appropriate for the problem at hand, trading the cost of one dimension against the others. EXAALT aims to develop a simulation tool that addresses key fusion and fission energy materials challenges at the atomistic level, including 1) limited fuel burn-up and 2) the degradation/instability of plasma-facing components in fusion reactors. As part of the fusion materials effort, the project uses classical models such as the Spectral Neighbor Analysis Potential (SNAP), developed by Aidan Thompson at Sandia, via the LAMMPS software package. Within the NESAP program, the goal for the EXAALT project is to optimize the SNAP module in LAMMPS for future-generation architectures. LAMMPS, and consequently the SNAP module in LAMMPS, is implemented in C++, and the Kokkos framework is used to offload computation to accelerators. The EXAALT team is currently working on improving the memory access patterns in some of the compute-intensive routines of the SNAP module.
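As a generic illustration of this kind of optimization (not EXAALT's actual C++/Kokkos kernels), the sketch below contrasts an array-of-structures layout, where reads of a single coordinate stride through memory, with a structure-of-arrays layout, whose contiguous reads map naturally onto vectorized CPU loads and coalesced GPU accesses:

```python
# Generic AoS vs. SoA illustration in NumPy; SNAP's real kernels are C++/Kokkos.
import numpy as np

n = 1_000_000
aos = np.random.rand(n, 3)  # AoS: x, y, z interleaved per particle

# SoA: each coordinate stored contiguously
soa_x, soa_y, soa_z = (aos[:, i].copy() for i in range(3))

r2_aos = (aos ** 2).sum(axis=1)          # strided reads of each component
r2_soa = soa_x**2 + soa_y**2 + soa_z**2  # contiguous reads of each component
assert np.allclose(r2_aos, r2_soa)       # same result, different access pattern
```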
- WDMApp, Choong-Seock Chang (PPPL), Fusion Energy Sciences / ECP
The Whole Device Model Application (WDMApp) project aims to build a tightly coupled core-edge simulation framework for studies of magnetically confined fusion plasmas. It uses XGC for the edge simulation and either GENE or GEM for the core simulation. These codes solve the same gyrokinetic equations but with different implementation methodologies. The goal of this project is to achieve high throughput for the whole device by coupling XGC with GENE or GEM. The work includes implementing efficient data exchange between the two codes (XGC and GENE, or XGC and GEM), task parallelization, and heterogeneous memory management, as well as GPU data structure management, inter-/intra-node load balancing, MPI communication, and OpenMP/CUDA/OpenACC optimization.
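A heavily simplified sketch of the coupling pattern is below, using mpi4py with a single hypothetical overlap-region array standing in for the exchanged fields; the actual WDMApp coupling layer, meshes, and interpolation are far more involved.

```python
# Toy two-code core-edge exchange with mpi4py (hypothetical field and blending;
# not WDMApp's actual coupling layer). Run with: mpirun -n 2 python couple.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # rank 0 plays the "core" code, rank 1 the "edge"
partner = 1 - rank
overlap = np.zeros(1024)      # field values in the core-edge overlap region

for step in range(10):
    overlap[:] = rank + step                     # stand-in for a physics advance
    recv = np.empty_like(overlap)
    comm.Sendrecv(overlap, dest=partner,         # ship our overlap field ...
                  recvbuf=recv, source=partner)  # ... and get the partner's
    overlap = 0.5 * (overlap + recv)             # stand-in for field blending
```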
- Lattice QCD, Carleton DeTar (Utah) / Balint Joo (JLAB), High Energy Physics / Nuclear Physics / ECP
The Lattice QCD project is unique within NESAP in that it is made up of several code teams that use common frameworks and libraries to achieve high performance on GPUs. The primary teams are the MILC code, led by Carleton DeTar, and the Chroma code, led by Balint Joo.
The Chroma code will be used to carry out first-principles lattice QCD calculations of the resonance spectrum of hadrons bound by the strong nuclear force, and to search for the existence of exotic mesons -- a focus of the new GlueX experiment at Jefferson Lab. These calculations will be carried out with physically relevant parameters. Chroma is fully integrated with QUDA and as such is well positioned for Perlmutter. In addition, Chroma uses PRIMME, a high-performance library developed at William & Mary for computing a few eigenvalues/eigenvectors and singular values/vectors. PRIMME does not have a GPU implementation at this time, so porting it is a significant effort for this project. The Chroma team is also interested in performance-portable solutions and will investigate data-parallel C++ techniques for heterogeneous processors, such as Kokkos and SYCL.
The MILC team and its collaborators have begun a multiyear project to calculate, to high precision in lattice QCD, a complete set of decay form factors for tree-level and rare decays of the B and D mesons. In addition, they are working on an accurate prediction of the size of the direct violation of charge-conjugation-parity (CP) symmetry in the decay of a K meson into two pi mesons, an important challenge for lattice QCD with the potential to uncover new phenomena responsible for the presently unexplained preponderance of matter over antimatter in the Universe. The MILC code is well positioned for Perlmutter, as it is already integrated with the NVIDIA-developed QUDA library for performing lattice QCD calculations on GPUs using the CUDA development platform. As part of the NESAP effort, however, the MILC team is working to integrate Grid, a data-parallel C++ mathematical object library developed by Peter Boyle at the University of Edinburgh. Grid has the potential to provide performance portability across a wide range of CPU- and GPU-based architectures.
- ASGarD, David Green (ORNL), Fusion Energy Sciences / Advanced Scientific Computing Research
The Adaptive Sparse Grid Discretization (ASGarD) code is a high-dimensional, high-order, discontinuous Galerkin finite element solver, under development, that uses adaptive sparse-grid methods to enable grid-based (Eulerian/continuum) solution of PDEs at previously unachievable dimensionality and resolution. The specific target science application is "noise-free, fully kinetic" (6D + time) simulation for magnetic fusion energy, which has previously been out of reach for problems of useful size due to the extreme number of degrees of freedom required by grid-based methods.
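A rough sense of why sparse grids matter in 6D comes from simply counting degrees of freedom; the snippet below compares a full tensor grid's n^6 growth with the indicative O(n log^5 n) scaling of a sparse grid (constants and basis order ignored; illustrative only, not ASGarD's actual counts).

```python
# Back-of-the-envelope degree-of-freedom counts for a 6D phase-space grid.
# Full tensor grid: n**6 points; sparse grid: roughly n * log2(n)**5 points
# (indicative scaling only, ignoring constants and basis order).
import math

d = 6
for level in (5, 8, 10):
    n = 2**level                          # points per dimension at this level
    full = n**d
    sparse = n * math.log2(n)**(d - 1)
    print(f"n={n:5d}  full grid ~{full:.2e}  sparse grid ~{sparse:.2e}")
```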
- Chombo-Crunch, David Trebotich (LBNL), Basic Energy Sciences / ECP
The ECP Subsurface application development project is building a multi-scale, multi-physics capability for modeling flow, chemistry, and mechanics processes in subsurface fractures based on the application code Chombo-Crunch. Chombo-Crunch is a high-resolution pore-scale subsurface simulator built on the Chombo software libraries, which support structured-grid, adaptive, finite volume methods for numerical PDEs.
- NAMD, Emad Tajkhorshid (UIUC), Biological and Environmental Research / Basic Energy Sciences
- NWChemEx, Hubertus van Dam (BNL), Biological and Environmental Research / Basic Energy Sciences / ECP
The NWChemEx project will redesign NWChem and re-implement a selected set of physical models for pre-exascale and exascale computing systems. This will provide the computational chemistry community with a software infrastructure that is scalable, flexible, and portable, and that supports a broad range of chemistry research on a broad range of computing systems. To guide this effort, the project focuses on two inter-related targeted science challenges relevant to the development of advanced biofuels: modeling the molecular processes underpinning the development of biomass feedstocks that can be grown on marginal lands, and new catalysts for the efficient conversion of biomass-derived intermediates into biofuels and other bioproducts. Solving these problems will enhance U.S. energy security by diversifying the energy supply chain. The NESAP work includes porting the Tensor Algebra for Many-body Methods (TAMM) library to GPUs, as well as developing mixed-precision methods in Hartree-Fock/DFT, coupled cluster, domain local pair natural orbital implementations of coupled cluster, and explicitly correlated localized coupled cluster algorithms.
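One standard mixed-precision pattern, shown below as a generic illustration (not NWChemEx code), is iterative refinement: perform the expensive solve in single precision, then recover double-precision accuracy with cheap residual corrections in double precision.

```python
# Mixed-precision iterative refinement on a dense solve (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)  # cheap low-precision solve

for _ in range(5):                                # refine in double precision
    r = b - A @ x                                 # residual in float64
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # near float64 accuracy
```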
- ImSim, Josh Meyers (LLNL), High Energy Physics
ImSim aims to simulate images from the Large Synoptic Survey Telescope (LSST), a large-aperture, wide-field optical telescope that will repeatedly observe a substantial fraction of the sky every night over a 10-year survey. Among other goals, LSST will enable cosmologists to probe the content and history of the accelerating universe. High-fidelity image simulations are used to exercise, characterize, and guide improvements to the LSST science pipelines under known conditions before they are applied to real LSST data. ImSim combines simulated catalogs, observing strategies, and site conditions produced upstream to generate, propagate, and collect the individual photons that ultimately form an image. The NESAP work includes porting the scientific ray-tracing code to the GPU, profiling ImSim, identifying and implementing code changes, and porting existing CPU-level OpenMP code to the GPU.
- WEST, Marco Govoni (ANL), Basic Energy Sciences
WEST is a massively parallel many-body perturbation theory code for large-scale materials science simulations, with a focus on materials for energy, water, and quantum information. WEST calculates GW and electron-phonon self-energies, and solves the Bethe-Salpeter equation starting from semilocal and hybrid density functional theory calculations.
The code does not require the explicit evaluation of dielectric matrices or of virtual electronic states, and can easily be applied to large systems. Localized orbitals, obtained from Bloch states using bisection techniques, are used to reduce the complexity of the calculation and enable the efficient use of hybrid functionals. The major computational steps in WEST are the calculation of the eigenstates of the static polarizability matrix and the calculation of the frequency-dependent polarizability matrix; both steps involve solving large linear systems.
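For a flavor of the kind of iterative Krylov solve such steps rely on, here is a minimal SciPy sketch on a stand-in diagonally dominant test matrix (not WEST's actual operators or solver):

```python
# Minimal Krylov (conjugate-gradient) solve of a large sparse SPD system,
# illustrating the class of linear algebra involved (not WEST code).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 100_000
# Tridiagonal, diagonally dominant SPD test matrix
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)      # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))
```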
- BerkeleyGW, Mauro Del Ben (LBNL), Basic Energy Sciences
BerkeleyGW is a massively parallel materials science package that simulates the excited-state properties of electrons for a variety of material systems, from bulk semiconductors and metals to nanostructured materials and molecules. It is based on many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology, and can be used in conjunction with many density functional theory codes for ground-state properties, such as PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The code is written in Fortran and has about 100,000 lines. By functionality, it divides into four modules: Epsilon, Sigma, Kernel, and Absorption. These modules scale very well on CPUs, and the team has started porting Epsilon and Sigma to GPUs using CUDA and OpenACC. The NESAP work involves porting Epsilon, Sigma, Kernel, and Absorption using CUDA, OpenACC, and OpenMP, and optimizing them by calling libraries, overlapping compute and communication, reducing device-host data transfer, and utilizing data streams.
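One generic version of the streams pattern is double buffering, sketched below with CuPy streams and events; this illustrates the technique rather than BerkeleyGW's Fortran implementation, and true copy/compute overlap additionally requires pinned host memory and events guarding buffer reuse.

```python
# Generic double-buffered stream pattern: overlap host-to-device copies with
# computation using CuPy streams and events (not BerkeleyGW's actual code).
import numpy as np
import cupy as cp

copy_stream = cp.cuda.Stream(non_blocking=True)
compute_stream = cp.cuda.Stream(non_blocking=True)

chunks = [np.random.rand(1 << 20) for _ in range(8)]
buffers, results = [None, None], []

for i, chunk in enumerate(chunks):
    with copy_stream:
        buffers[i % 2] = cp.asarray(chunk)          # enqueue H2D copy
    ready = copy_stream.record()                    # event: copy i finished
    compute_stream.wait_event(ready)                # compute waits on copy i only
    with compute_stream:
        results.append(cp.fft.fft(buffers[i % 2]))  # stand-in compute kernel

compute_stream.synchronize()  # production code would also guard buffer reuse
```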
- Quantum ESPRESSO, Annabella Selloni (Princeton), Robert DiStasio (Cornell), and Roberto Car (Princeton), Basic Energy Sciences
Quantum ESPRESSO is an open-source density functional theory (DFT) code widely used in materials science and quantum chemistry to compute properties of material systems such as atomic structures, total energies, and vibrational properties. Accurate calculations of important electronic properties, such as band gaps and excitation energies, are achievable for many systems through so-called hybrid functional calculations, which include a fraction of the exact exchange potential -- the contribution of the Pauli exclusion principle. The Car-Parrinello (CP) extension, which is the focus of this effort, can additionally incorporate effects from variable-cell dynamics and free-energy surface calculations at fixed cell through metadynamics.
- Our expectation for the potential NESAP postdoc project centers on algorithmic development and optimization specific to the NERSC-9 architecture. For instance, one could improve our pilot GPU implementation in CUDA Fortran, especially with asynchronous overlap of CPU- and GPU-side subroutines and with efficient multi-GPU programming. (Note: we hope to get help from the NESAP postdoc with a GPU-based performance-portability strategy, e.g., SIMT improvements.)
- In addition to boosting the computation using GPUs, we are also interested in dealing with other existing performance barriers, such as communication and workload imbalance, that we have analyzed using our CPU-based implementation. Here we already have a solid plan, which involves speeding up communication with a sparse domain in real space and asynchronously overlapping computation and communication. For the workload imbalance, we are developing a dynamic, graph-theory-based scheduler.
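A minimal sketch of the computation/communication overlap idea, using nonblocking MPI via mpi4py with hypothetical halo and interior arrays (the CP code's real exchanges involve distributed 3D FFT and domain data):

```python
# Minimal overlap of communication with computation using nonblocking MPI
# (hypothetical arrays, illustration only). Run with: mpirun -n 2 python ov.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, partner = comm.Get_rank(), 1 - comm.Get_rank()

halo_out = np.random.rand(4096)      # boundary data to ship to the partner
halo_in = np.empty_like(halo_out)
interior = np.random.rand(1 << 22)   # data that needs no communication

reqs = [comm.Isend(halo_out, dest=partner),   # start the exchange ...
        comm.Irecv(halo_in, source=partner)]
interior_result = np.sin(interior).sum()      # ... while computing on interior
MPI.Request.Waitall(reqs)                     # halo ready: finish boundary work
boundary_result = halo_in.sum()
```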
- E3SM, Noel Keen (LBNL) / Mark Taylor (SNL), Biological and Environmental Research / ECP
- MFDn, Pieter Maris (Iowa State), Nuclear Physics
- WarpX / AMReX, Jean-Luc Vay / Ann Almgren (LBNL), High Energy Physics / ECP
The long-term goal of the WarpX project is to develop simulation tools, based on particle-in-cell technologies, capable of modeling chains of tens to thousands of plasma-based particle accelerator stages for high-energy collider designs by 2030-2040. The current state of the art enables the modeling of one stage in 3D at a resolution that is insufficient for the electron beam quality envisioned for future colliders. Reaching the ultimate goal necessitates the most advanced and largest supercomputers available, combined with the most advanced algorithms, including adaptive mesh refinement, for additional boosts in performance.
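For readers unfamiliar with the method, the toy sketch below shows the deposit/solve/gather/push cycle of a 1D electrostatic particle-in-cell step in normalized units; WarpX itself is a 3D electromagnetic C++/AMReX code with mesh refinement, so this is only a schematic of the underlying algorithm.

```python
# Toy 1D electrostatic PIC cycle (deposit / field solve / gather / push) on a
# periodic grid in normalized units; schematic only, not WarpX code.
import numpy as np

ng, n_part, L, dt = 64, 10000, 1.0, 0.05
dx = L / ng
x = np.random.rand(n_part) * L        # particle positions
v = np.random.randn(n_part) * 0.01    # particle velocities

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)   # grid wavenumbers for field solve
for step in range(100):
    # 1) deposit charge to the grid (nearest-grid-point weighting),
    #    with a uniform neutralizing background so <rho> = 0
    cell = (x / dx).astype(int) % ng
    rho = np.bincount(cell, minlength=ng) * (ng / n_part) - 1.0
    # 2) solve Gauss's law in Fourier space: ik * E_k = rho_k
    rho_k = np.fft.fft(rho)
    E = np.fft.ifft(np.where(k != 0, rho_k / (1j * k), 0.0)).real
    # 3) gather the field to the particles and 4) push (electrons, charge -1)
    v -= E[cell] * dt
    x = (x + v * dt) % L
```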
Data Analytics
- Tomographic Reconstruction in Python (TomoPy), Doga Gursoy (Argonne National Laboratory), Basic Energy Sciences
- Time Ordered Astrophysics Scalable Tools (TOAST), Julian Borrill (Lawrence Berkeley National Laboratory), High Energy Physics
TOAST is a generic, modular, massively parallel, hybrid Python/C++ software framework for simulating and processing time-stream data collected by telescopes. It was originally developed to support the Planck satellite mission to map anisotropies in the cosmic microwave background (CMB); Planck observed the microwave and infrared sky over four years using 72 detectors. By contrast, the upcoming CMB-S4 experiment will scan the sky for five years starting in the mid-2020s, using a suite of geographically distributed ground-based telescopes with over 500,000 detectors. To prepare for CMB-S4 and support other CMB experiments, TOAST simulates these observations at scale on supercomputers, including the effects of the atmosphere and weather at the various sites (or of being in space), and is then used to analyze the simulated observations. This process is used to understand critical sources of systematic uncertainty and to inform the observing strategy so as to optimize the science return. TOAST already scales to the full Cori KNL partition; a postdoc working closely with the TOAST team would port the application to GPUs.
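At the heart of processing such data is binning time-ordered detector samples into sky pixels; a schematic NumPy version of naive binned map-making is below (TOAST's actual pipelines add noise filtering, destriping, and much more).

```python
# Schematic binned map-making: accumulate time-ordered detector samples into
# sky pixels and average (illustration only, not TOAST code).
import numpy as np

n_pix, n_samp = 10_000, 1_000_000
pixels = np.random.randint(0, n_pix, size=n_samp)  # pointing: sample -> pixel
sky = np.random.randn(n_pix)                       # "true" sky map
tod = sky[pixels] + 0.1 * np.random.randn(n_samp)  # time-ordered data + noise

signal_sum = np.bincount(pixels, weights=tod, minlength=n_pix)
hits = np.bincount(pixels, minlength=n_pix)
binned_map = np.where(hits > 0, signal_sum / np.maximum(hits, 1), 0.0)
```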
- Dark Energy Spectroscopic Instrument Codes (DESI), Stephen Bailey (Lawrence Berkeley National Laboratory), High Energy Physics
The DESI experiment will image approximately 14,000 square degrees of the night sky to create the most detailed 3D map of the universe to date. Starting in fall 2019 and continuing for five years, DESI will send batches of images to NERSC nightly. The images are processed by the DESI spectroscopic pipeline to convert raw images into spectra; from those spectra, the redshifts of quasars and galaxies are extracted and used to determine their distances. The pipeline is almost entirely in Python and relies heavily on libraries such as NumPy and SciPy. Work is in progress to convert the DESI pipeline into a GPU version.
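One common strategy for such ports is to swap NumPy for an array library with a NumPy-compatible interface; the sketch below uses CuPy on a toy extraction-like step (hypothetical functions, illustrative only, not the actual DESI port).

```python
# Illustrative NumPy -> CuPy swap for an array-heavy pipeline step (hypothetical
# "extraction"; real DESI spectral extraction involves per-CCD PSF models).
import numpy as np
import cupy as cp

def extract_cpu(image, profiles):
    # toy "extraction": project image rows onto spectral profiles
    return profiles @ image

def extract_gpu(image, profiles):
    d_image, d_profiles = cp.asarray(image), cp.asarray(profiles)
    return cp.asnumpy(d_profiles @ d_image)   # same math, GPU-resident

image = np.random.rand(4000, 4000).astype(np.float32)
profiles = np.random.rand(500, 4000).astype(np.float32)
assert np.allclose(extract_cpu(image, profiles),
                   extract_gpu(image, profiles), rtol=1e-3)
```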
- ATLAS Data Processing (ATLAS), Paolo Calafiura (Lawrence Berkeley National Laboratory), High Energy Physics
Over ~40% of the CPU cycles for the ATLAS experiment at the LHC are currently dedicated to expensive full Geant4 simulation. The fast simulation tool FastCaloSim interfaces with the standard ATLAS software and runs at least one order of magnitude faster than full Geant4 simulation while still accounting for detailed physics. This project aims to develop FastCaloSim into the first GPU-accelerated ATLAS application to run in production. This would enable ATLAS to offload up to 50% of the simulation CPU cycles used worldwide to GPU accelerators. It would also give ATLAS valuable production experience on heterogeneous systems, tackling multi-node and multi-GPU load balancing, portability, and I/O issues.
- CMS Data Processing (CMS), Dirk Hufnagel (Fermi National Accelerator Laboratory), High Energy Physics
- NextGen Software Libraries for LZ, Maria Elena Monzani (SLAC), High Energy Physics
LZ (LUX-ZEPLIN) is a next-generation direct-detection dark matter experiment. When completed, it will be the world's most sensitive experiment for dark matter particles known as WIMPs (Weakly Interacting Massive Particles) over a large range of WIMP masses. LZ uses 7 tonnes of active liquid xenon to search for xenon nuclei that recoil in response to collisions caused by an impinging flux of WIMPs. Installation is underway, and the experiment is expected to start data collection in 2020. LZ requires large amounts of simulation to fully characterize the backgrounds in the detector. The simulation chain uses standard CPU-based HEP software frameworks (Gaudi, ROOT, Geant, etc.), and the team is looking to GPUs to speed up the ray tracing of optical photons and the reconstruction of complex event topologies.
- Resilient Scientific Workflows, Kjiersten Fagnan (JGI), Biological and Environmental Research
The DOE Joint Genome Institute (JGI) at LBNL has a mission to advance genomics in support of DOE missions related to clean energy generation and environmental characterization. We have an opening for a postdoctoral fellow to work with the JGI to optimize cross-facility data analysis pipelines for JGI teams and users. The project will analyze data collected from instrumented analysis pipelines, performance data from the computing resources used by the JGI, and timing information from data transfers, and will use these data to design and optimize resilient workflows.
- Data Analytics at the Exascale for Free Electron Lasers (ExaFEL), Amedeo Perazzo (SLAC), Nicholas Sauter (LBNL), Christine Sweeney (LANL), Basic Energy Sciences / ECP
Detector data rates at light sources are advancing exponentially: the Linac Coherent Light Source (LCLS), an X-ray free-electron laser (XFEL), will increase its data throughput by three orders of magnitude by 2025. XFELs are designed to study materials on molecular length scales at ultra-fast time scales. Users of an XFEL require an integrated combination of data processing and scientific interpretation, where both aspects demand intensive computational analysis. ExaFEL will both provide critical capabilities to LCLS and serve as a model for other computational pipelines across the DOE landscape. The NESAP ExaFEL project aims to port two code bases to GPUs:
1) Multi-Tiered Iterative Phasing (MTIP) for fluctuation X-ray scattering (FXS). Here, the experiment injects small droplets containing multiple molecules into the FEL beam, and from the scattering data the algorithm determines the underlying structure of the molecule(s). MTIP-FXS is written in C++.
2) The Computational Crystallography Toolbox (CCTBX), developed as the open-source component of a larger code base for advancing the automation of macromolecular structure determination. Here, the experiment drops micron-sized crystals into the FEL beam. CCTBX is a hybrid Python/C++ (Boost.Python) framework.
Learning
- iNAIADS: ML for hydrology extremes under climate change, Charuleka Varadharajan (LBNL), Biological and Environmental Research
As climate change progresses, there is an increasing need to understand and predict how water quality in streams and rivers will respond to climate disturbances such as heatwaves, floods, and droughts. This project is investigating the applicability of cutting-edge AI/ML methods, including graph-based networks, (meta) transfer learning, and causal inference, to analysis and prediction based on CONUS-scale multivariate observational datasets.
- DESLearning: Cosmology with Machine Learning from the Dark Energy Survey, Tomasz Kacprzak (ETHZ), Agnès Ferté (SLAC), High Energy Physics
The Dark Energy Survey (DES) is a galaxy survey designed to observe the large-scale structure of the universe using a number of cosmological probes. Recently, deep learning methods have been developed that yield highly precise parameter measurements, but these have only been deployed at moderate resolution on DES simulation data. The main challenge in this project is to create a parallel HPC framework on Perlmutter that can train the ML system for multi-probe cosmological parameter analysis with DES-Y6 at high resolution (nside=1024) in less than 24 hours. This will involve model exploration and development, performance benchmarking, and evaluating parallelization strategies.
- Exabiome GNN, Kathy Yelick (UCB/LBNL), Biological and Environmental Research
Metagenomic datasets provide rich sources of information about the microorganisms in an environment and the complex relationships that exist within these communities. To tackle this complex data, the Exabiome project has developed the MetaHipMer software pipeline, which can process very large metagenome datasets by harnessing the power of modern exascale architectures. Currently, the scaffolding stage of MetaHipMer uses a breadth-first search to identify erroneous edges in the contig graph, so this stage of the algorithm in its current form is not amenable to GPU architectures due to limited parallelism. This project aims to replace the breadth-first search with a machine-learning-based decision function, simplifying the code, improving accuracy, and making the algorithm more amenable to GPU acceleration.
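Schematically, the change replaces a sequential frontier expansion with an independent per-edge prediction; the toy sketch below (hypothetical graph, features, and threshold model, not the real MetaHipMer scaffolder) contrasts the two.

```python
# Toy contrast between BFS-based traversal and a local, learned decision
# function over a contig graph (hypothetical features and threshold model).
from collections import deque

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}   # contig adjacency (toy)

def bfs_reachable(src, limit):
    # BFS: inherently sequential frontier expansion -> limited parallelism
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == limit:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return seen

def edge_is_erroneous(u, v):
    # Stand-in learned decision function: a per-edge prediction from local
    # features, evaluable for all edges independently (GPU-friendly).
    degree_feature = len(graph[u]) + len(graph[v])
    return degree_feature > 4   # hypothetical threshold standing in for a model

edges = [(u, v) for u in graph for v in graph[u] if u < v]
flags = [edge_is_erroneous(u, v) for u, v in edges]  # embarrassingly parallel
```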
- Machine learning accurate electronic structure properties for first-principles electrochemistry, Christopher Sutton (USC), Basic Energy Sciences
Electrochemical reactions are key to decarbonizing the world economy, but progress in understanding electrochemical systems is often hindered by the diversity of conditions under which experiments are run and calculations are performed. A robust and efficient computational simulation framework is of prime importance for this domain. The proposed project aims to leverage graph neural networks to tackle these challenges by accelerating the development of accurate, efficient, exascale-ready solvated beyond-DFT methods, with the goal of developing models that can leverage up to millions of samples and predict the entire electronic band structure from eigenvalues at specific k-points and energies.
- FourCastNEXT: FourCastNet-EXTremes, Bill Collins (LBNL), Mike Pritchard (UCI/NVIDIA), Biological and Environmental Research
Characterizing the nature of extreme events (frequency, intensity, duration, spatial extent) in a warming world is a scientific grand challenge for predicting the impacts of climate change. Studying low-likelihood, high-impact extreme weather and climate events in a warming world requires massive ensembles to capture the long tails of multivariate distributions, and it is simply impossible to generate massive ensembles, of say 1,000 members, using traditional high-resolution numerical climate simulations. This application will use machine learning (ML) in place of traditional numerical simulation for predicting extreme events, where ML has proven successful in terms of accuracy and fidelity at five orders of magnitude lower computational cost than numerical methods.
- Scaling BonDNet for CHiPPS Reaction Network Active Exploration, Samuel Blau (LBNL), Basic Energy Sciences
The Center for High Precision Patterning Science (CHiPPS) aims to develop a fundamental understanding and control of patterning materials and processes for energy-efficient, large-area patterning with atomic precision. The CHiPPS scientific strategy involves the use of chemical reaction networks (CRNs) to understand reactivity triggered by extreme-ultraviolet (EUV) exposure of thin-film photoresists. The BonDNet graph neural network (GNN) is proposed to reduce the cost of simulating all participating molecules with DFT, allowing larger reaction networks to be built, and has been demonstrated on datasets with up to 100k samples. This project aims to efficiently scale up BonDNet training to datasets with over 10 million samples, yielding models sufficiently general and accurate to be viable in EUV lithography CRN active exploration.
- Laser-plasma surrogate modeling for coupled AI+simulation, Blagoje Djordjevic (LLNL), Fusion Energy Sciences
This project aims to use modern machine learning methods to develop multi-fidelity surrogate models that tackle the challenge of including subscale physics in high-level models of laboratory-scale laser-plasma experiments that imitate astrophysical phenomena. The ultimate goal is to properly account for laser-plasma interactions, particle collisions, and ionization effects in order to better understand how high-energy-density systems evolve in the lab or within stars. The work will be coupled to ongoing research studying spectroscopic signatures in laser-driven, warm dense matter experiments, with significant implications for basic astrophysical science and stellar opacities.
- Extreme Scale Spatio-Temporal Learning (LSTNet), Shinjae Yoo (BNL), Advanced Scientific Computing Research
The majority of DOE exascale simulation and experimental applications pose spatio-temporal learning challenges, and scaling spatio-temporal learning algorithms on upcoming heterogeneous exascale computers is critical to enabling scientific breakthroughs and advances in industrial applications. The project's proposed spatio-temporal data modeling and analysis methods enable easy and efficient analysis of large time-series datasets. The proposed deep learning scaling work is applicable to many time-series data analysis tasks and can be used by the broader data science community as well. The team is designing novel spatio-temporal learning algorithms and developing novel distributed optimization algorithms that scale on various exascale architectures.
- Accelerating High Energy Physics reach with Machine Learning, Benjamin Nachman (LBNL) / Jean-Roch Vlimant (Caltech), High Energy Physics
This project aims to develop and deploy new machine-learning-enhanced methodologies in high energy physics, including fast surrogate models for detector simulation, anomaly detection for new physics processes, and correcting physics measurements for detector effects.
- Deep Learning Thermochemistry for Catalyst Composition Discovery and Optimization, Zachary Ulissi (CMU), Basic Energy Sciences
The CatalysisDL project is focused on developing, tuning, and scaling deep learning models to aid in the selection and optimization of catalysis materials. Methods employed include graph neural network models such as DimeNet++ and GemNet. This work is done as part of the Open Catalyst Project.
NESAP partnership with NERSC, Cray, and Intel for the Cori System
Advanced Scientific Computing Research (ASCR)
- Optimization of the BoxLib Adaptive Mesh Refinement Framework for Scientific Application Codes, Ann Almgren (Lawrence Berkeley National Laboratory); Postdoc assigned
- High-Resolution CFD and Transport in Complex Geometries Using Chombo-Crunch, David Trebotich (Lawrence Berkeley National Laboratory); Postdoc assigned
Biological and Environmental Research (BER)
- CESM Global Climate Modeling, John Dennis (National Center for Atmospheric Research)
- High-Resolution Global Coupled Climate Simulation Using The Accelerated Climate Model for Energy (ACME), Hans Johansen (Lawrence Berkeley National Laboratory)
- Multi-Scale Ocean Simulation for Studying Global to Regional Climate Change, Todd Ringler (Los Alamos National Laboratory)
- Gromacs Molecular Dynamics (MD) Simulation for Bioenergy and Environmental Biosciences, Jeremy C. Smith (Oak Ridge National Laboratory)
- Meraculous, a Production de novo Genome Assembler for Energy-Related Genomics Problems, Katherine Yelick (Lawrence Berkeley National Laboratory)
Basic Energy Sciences (BES)
- Large-Scale Molecular Simulations with NWChem, Eric Jon Bylaska (Pacific Northwest National Laboratory)
- Parsec: A Scalable Computational Tool for Discovery and Design of Excited State Phenomena in Energy Materials, James Chelikowsky (University of Texas, Austin)
- BerkeleyGW: Massively Parallel Quasiparticle and Optical Properties Computation for Materials and Nanostructures, Jack Deslippe (NERSC)
- Materials Science using Quantum Espresso, Paul Kent (Oak Ridge National Laboratory); Postdoc assigned
- Large-Scale 3-D Geophysical Inverse Modeling of the Earth, Greg Newman (Lawrence Berkeley National Laboratory)
Fusion Energy Sciences (FES)
- Understanding Fusion Edge Physics Using the Global Gyrokinetic XGC1 Code, Choong-Seock Chang (Princeton Plasma Physics Laboratory)
- Addressing Non-Ideal Fusion Plasma Magnetohydrodynamics Using M3D-C1, Stephen Jardin (Princeton Plasma Physics Laboratory)
High Energy Physics (HEP)
- HACC (Hardware/Hybrid Accelerated Cosmology Code) for Extreme Scale Cosmology, Salman Habib (Argonne National Laboratory)
- The MILC Code Suite for Quantum Chromodynamics (QCD) Simulation and Analysis, Doug Toussaint (University of Arizona)
- Advanced Modeling of Particle Accelerators, Jean-Luc Vay (Lawrence Berkeley National Laboratory)
Nuclear Physics (NP)
- Domain Wall Fermions and Highly Improved Staggered Quarks for Lattice QCD, Norman Christ (Columbia University) and Frithjof Karsch (Brookhaven National Laboratory)
- Chroma Lattice QCD Code Suite, Balint Joo (Jefferson National Accelerator Facility)
- Weakly Bound and Resonant States in Light Isotope Chains Using MFDn -- Many Fermion Dynamics Nuclear Physics, James Vary and Pieter Maris (Iowa State University)
Additional NESAP Application Teams with access to NERSC training and early hardware
- GTC-P (William Tang/PPPL)
- GTS (Stephane Ethier/PPPL)
- VORPAL (John Cary/TechX)
- TOAST (Julian Borrill/LBNL)
- Qbox/Qb@ll (Yosuke Kanai/U. North Carolina)
- CALCLENS and ROCKSTAR (Risa Wechsler/Stanford)
- WEST (Marco Govoni/U. Chicago)
- QLUA (William Detmold/MIT)
- P3D (James Drake/U. Maryland)
- WRF (John Michalakes/ANL)
- PHOSIM (Andrew Connolly/U. Washington)
- SDAV tools (Hank Childs/U. Oregon)
- M3D/M3D-K (Linda Sugiyama/MIT)
- DGDFT (Lin Lin/U.C. Berkeley)
- GIZMO/GADGET (Joel Primack/U.C. Santa Cruz)
- ZELMANI (Christian Ott/Caltech)
- VASP (Martijn Marsman/U. Vienna)
- NAMD (James Phillips/U. Illinois)
- PHOENIX-3D (Eddie Baron/U. Oklahoma)
- ACE3P (Cho-Kuen Ng/SLAC)
- S3D (Jacqueline Chen/SNL)
- ATLAS (Paolo Calafiura/LBNL)
- BBTools genomics tools (Jon Rood/LBNL, JGI)
- DOE MiniApps (Alice Koniges, LBNL)
- HipGISAXS (Alexander Hexemer/LBNL)
- GENE (Jenko, UCLA)