
Heavy-Ion Fusion Science (HIFS)

WARP3d simulation of a 12-cell NDCX-II configuration, showing a beam bunch exiting

Heavy ion fusion is the process by which intense, short bursts of high-power ion beams are focused on a target containing thermonuclear fuel, causing fusion and producing a net gain of energy. Instead of compressing the fuel pellet with enormous lasers (as in laser fusion) or confining the fuel with magnets (as in ITER), the idea is to use a very high-current particle accelerator to produce an ion beam that compresses and ignites the fuel. The beams may be composed of heavy ions such as xenon, mercury, or lead.

The LBNL Neutralized Drift Compression Experiment II (NDCX-II) is an accelerator designed to study how to produce compact, intense, short-pulse ion beams for heavy-ion fusion. The system was designed in part using many simulations run at NERSC. The NDCX-II experiments recently reached completion, helping to pave the way toward making inertial fusion energy an affordable and environmentally attractive means of producing commercial electricity.

Realizing heavy-ion-driven inertial fusion requires a detailed quantitative understanding of the behavior of high-current ion beams. Berkeley Lab researchers David Grote, Alex Friedman, and Jean-Luc Vay used the WARP3d code on the NERSC Franklin and Hopper systems to study the transport and acceleration of ion beams in NDCX-II. WARP3d combines aspects of a multi-dimensional, multi-species, electrostatic and electromagnetic particle-in-cell (PIC) simulation code with those of an accelerator code. Both capabilities are required to study the “space-charge” effects that dominate intense ion beams; these effects arise from excess electric charge that is treated as a continuum distributed over a region of space rather than as distinct point-like charges.

WARP3d “ensemble” calculations (hundreds of runs carried out in parallel) were used to assess NDCX-II performance in the presence of component imperfections and to set design tolerances for various accelerator elements. For example, WARP3d runs revealed the important effects of solenoid alignment errors; a solenoid is a compact coil that operates at very high magnetic fields and is used to focus the ion beam in the horizontal and vertical planes. Overall, the NERSC simulations were vital in allowing NDCX-II researchers to arrive at a robust, optimal operating point for the beam. Grote and Vay used up to 6,000 processors and over a million hours of machine time to do the work.
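To make the particle-in-cell idea concrete, here is a minimal one-dimensional electrostatic PIC sketch in plain Python/NumPy. It is illustrative only and is not taken from WARP3d; it simply shows the core cycle described above: deposit particle charge onto a grid (the continuum treatment of space charge), solve the periodic Poisson equation with an FFT, and push the particles in the resulting field.

    # Minimal 1D electrostatic particle-in-cell (PIC) sketch.
    # Illustrative only; NOT the WARP3d implementation.
    import numpy as np

    nx, npart, L = 64, 10_000, 1.0        # grid cells, particles, domain length
    dx, dt, qm = L / nx, 1e-3, -1.0       # cell size, time step, charge/mass ratio
    rng = np.random.default_rng(0)
    x = rng.uniform(0, L, npart)          # particle positions
    v = rng.normal(0, 0.05, npart)        # particle velocities

    for step in range(100):
        # 1) Charge deposition: cloud-in-cell weighting onto the grid,
        #    so the beam's space charge is represented as a continuum.
        rho = np.zeros(nx)
        i = (x / dx).astype(int) % nx
        f = x / dx - (x / dx).astype(int)          # fractional cell offset
        np.add.at(rho, i, 1 - f)
        np.add.at(rho, (i + 1) % nx, f)
        rho = rho / rho.mean() - 1.0               # neutralizing background

        # 2) Field solve: periodic Poisson equation (eps0 = 1) via FFT,
        #    then E = -dphi/dx.
        k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        k[0] = 1.0                                 # avoid divide-by-zero
        phi_hat = np.fft.fft(rho) / k**2
        phi_hat[0] = 0.0                           # drop the zero mode
        E = np.real(np.fft.ifft(-1j * k * phi_hat))

        # 3) Gather and push: interpolate E back to the particles,
        #    then advance velocities and positions.
        Ep = (1 - f) * E[i] + f * E[(i + 1) % nx]
        v += qm * Ep * dt
        x = (x + v * dt) % L                       # periodic boundary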

This work is part of the NERSC repository mp42, "Simulation of intense beams for heavy-ion-fusion science."  Alex Friedman (LBNL) is the PI.

Recently, NERSC staff have been researching ways to improve the performance of the WARP3d simulations. A key ingredient in WARP3d is its combined use of Fortran and the Python scripting language. The Python interpreter allows interactive, user-programmable code control and provides a more flexible means of inputting data than traditional command-line arguments or namelist-style interfaces. However, using Python in this way requires dynamic shared libraries. Although NERSC's Cray XE6 Hopper system supports dynamic shared libraries through DVS projection of the shared root file system onto the compute nodes, users discovered that shared-library performance is relatively poor at large scale, limiting WARP3d performance.

In a paper presented at the Cray User Group 2012 annual conference, NERSC staff members Zhengji Zhao, Katie Antypas, Yushu Yao, Rei Lee, and Tina Butler, along with Cray employee Mike Davis, reported on ways to improve shared-library performance. The research yielded important clues as to the source of the large "startup" times for Python and suggested alternative ways of running to improve performance. In particular, the tests identified the Lustre "scratch" file system as the best place for users to store shared object files (.so files) and Python modules. Using this file system yielded a nearly ten-fold improvement in WARP3d startup time at 36,000 cores on Hopper, the concurrency level desired by users.
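As a schematic illustration of this kind of staging (the scratch path and directory name below are placeholders, not names prescribed by the paper), a Python-driven code can be pointed at modules and .so files copied to Lustre scratch, and the import phase that dominates startup can be timed directly:

    # Schematic sketch, not code from the CUG 2012 paper. It stages
    # Python's module search path onto the Lustre scratch file system
    # and times the import phase that dominated WARP3d startup at scale.
    import os
    import sys
    import time

    # Point Python at copies of the modules and shared objects staged on
    # Lustre scratch rather than the DVS-projected shared root file system.
    # "python-libs" is a hypothetical staging directory.
    scratch = os.environ.get("SCRATCH", "/scratch/username")
    sys.path.insert(0, os.path.join(scratch, "python-libs"))

    t0 = time.time()
    import numpy  # stand-in for importing a large Fortran/Python code
    print(f"import took {time.time() - t0:.2f} s on {os.uname().nodename}")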