Brian Wirth
1.1. Project Information - Modeling plasma-surface interactions
Document Prepared By | Brian Wirth |
---|---|
Project Title | Modeling plasma-surface interactions |
Principal Investigator | Brian Wirth |
Participating Organizations | University of California, Berkeley; University of Tennessee; Oak Ridge National Laboratory |
Funding Agencies | DOE SC, DOE NSA, NSF, NOAA, NIH, Other: |
2. Project Summary & Scientific Objectives for the Next 5 Years
Please give a brief description of your project - highlighting its computational aspect - and outline its scientific objectives for the next 3-5 years. Please list one or two specific goals you hope to reach in 5 years.
Plasma-material interactions are widely acknowledged to pose an immense scientific challenge and to be one of the most critical issues in magnetic confinement fusion research. The demands on plasma-facing materials in a steady-state fusion device include extreme particle and thermal fluxes. These energetic fluxes have pronounced impacts on the topology and chemistry of the near-surface region of the material, which in turn influence the plasma sheath potentials and the subsequent threat spectra. These evolutions are also inherently multiscale in time and are likely controlled by diffusional phenomena that are influenced by the high heat loads and the resulting thermal (and stress) gradients into the material, as well as by the defect micro/nanostructures induced by both ion and neutron irradiation. This complexity is further underscored by the fact that the plasma and the material surface are strongly coupled to each other, mediated by an electrostatic and magnetic sheath, despite the vastly different physical scales of surface (~nm) versus plasma (~mm) processes.
For example, the high probability (>90%) of prompt local ionization and re-deposition of sputtered material atoms means that the surface material in contact with the plasma is itself a plasma-deposited surface, not the original ordered material. Likewise, the recycling of hydrogenic plasma fuel is self-regulated through processes involving near-surface fuel transport in the material and the ionization sink action of the plasma. The intense radiation environment (ions, neutrons, photons) ensures that the material properties are modified and dynamically coupled to the plasma-material surface interaction processes. Some of the most critical plasma-material interaction issues include: i) the net erosion of plasma-facing surfaces; ii) net tritium fuel retention in surfaces; iii) hydrogen-isotope and material mixing in the wall; and iv) minimizing impurity contamination of the core plasma. Furthermore, the plasma-material surface boundary plays a central role in determining the fusion performance of the core plasma. However, while it is widely accepted that the plasma-surface interface sets a critical boundary condition for the fusion plasma, predictive capabilities for plasma-surface interactions (PSI) remain highly inadequate.
Gaining understanding and predictive capability in this critical area will require simultaneously addressing complex and diverse physics occurring over a wide range of length scales (angstroms to meters) and time scales (femtoseconds to days, and beyond to operating lifetimes). The shortest time and length scales correspond to individual ion implantation and sputtering events at or near the material surface, as well as a range of ionization and recombination processes of the sputtered neutrals and ions in the near-surface sheath. At intermediate length and time scales, a wealth of physical processes is initiated, including diffusion of the implanted ionic/neutral species, possible chemical sputtering at the surface, the formation of gas bubbles, surface diffusion driving surface topology changes, and phonon scattering by radiation defects that reduces the thermal conductivity of the material. At longer length and time scales, additional phenomena become important, such as long-range material transport in the plasma, re-deposition of initially sputtered surface atoms, amorphous film growth, and diffusion of hydrogenic species into, and permeation through, the bulk material. This broad palette of physical phenomena will require development not only of detailed physics models and computational strategies at each of these scales, but also of algorithms and methods to strongly couple them in a way that can be robustly validated. While present research is confined to individual scales, or to pioneering ways of coupling two or more of them, the current approaches already push the state of the art in technique and available computational power. Therefore, the simulations spanning multiple scales needed for ITER, DEMO, etc., will require extreme-scale computing platforms and integrated physics and computer science advances.
3. Current HPC Usage and Methods
3a. Please list your current primary codes and their main mathematical methods and/or algorithms. Include quantities that characterize the size or scale of your simulations or numerical experiments; e.g., size of grid, number of particles, basis sets, etc. Also indicate how parallelism is expressed (e.g., MPI, OpenMP, MPI/OpenMP hybrid)
Molecular Dynamics (LAMMPS) codes - parallelism by MPI - massively parallel solution of Newtonian dynamics for known, short-range interatomic potentials - easily handle hundreds of millions of atoms - velocity Verlet/leapfrog or predictor-corrector time integration. Limited by femtosecond time steps to hundreds of nanoseconds of simulation time.
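For reference, the following is a minimal, illustrative sketch of the velocity Verlet update that underlies this kind of MD time integration. The Lennard-Jones pair potential, its parameters, and the O(N^2) force loop are placeholder assumptions chosen for readability; a production code such as LAMMPS uses optimized short-range potentials, neighbor lists, and MPI domain decomposition.

```python
import numpy as np

def lj_forces(x, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces (illustrative stand-in for a real
    short-range interatomic potential); O(N^2), no neighbor lists or cutoff."""
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n):
        for j in range(i + 1, n):
            r = x[i] - x[j]
            d2 = np.dot(r, r)
            sr6 = (sigma**2 / d2) ** 3
            # force on atom i from atom j, directed along their separation
            fij = 24.0 * eps * (2.0 * sr6**2 - sr6) / d2 * r
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(x, v, m, dt, nsteps):
    """Advance positions and velocities with the velocity Verlet scheme.
    In real MD, dt is of order femtoseconds, which is what caps the
    reachable simulation time at roughly hundreds of nanoseconds."""
    f = lj_forces(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m            # first half kick
        x += dt * v                      # drift
        f = lj_forces(x)                 # forces at the new positions
        v += 0.5 * dt * f / m            # second half kick
    return x, v

# Illustrative use: eight atoms on a simple cubic lattice, reduced LJ units.
x0 = 1.5 * np.array([[i, j, k] for i in range(2)
                     for j in range(2) for k in range(2)], dtype=float)
v0 = np.zeros_like(x0)
x, v = velocity_verlet(x0, v0, m=1.0, dt=1.0e-3, nsteps=100)
```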
Kinetic Monte Carlo codes - mostly serial, but parallelized through replicas - a variety of codes is used, none of which is standard
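The inner loop common to such KMC codes is the residence-time (BKL) algorithm sketched below; the event rates and the configuration update are placeholders, since the actual rate catalogs are code- and problem-specific.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One residence-time (BKL) KMC step: select an event with probability
    proportional to its rate, then advance the clock by an exponentially
    distributed waiting time."""
    total = sum(rates)
    target = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= target:
            event = i
            break
    dt = -math.log(rng.random()) / total
    return event, dt

# Illustrative use: three competing thermally activated processes.
rates = [1.0e3, 5.0e2, 1.0e1]   # placeholder rates, s^-1
t = 0.0
for _ in range(10):
    event, dt = kmc_step(rates)
    t += dt
    # a real code would update the configuration and recompute rates here
```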
ParaSpace - parallel cluster dynamics with spatial dependence, which involves a parallel, large sparse-matrix linear solver (PARDISO) - parallelism by OpenMP - backward-difference time integration - easily treats ~10^7 degrees of freedom - will need to extend to >10^12
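To illustrate the numerical kernel, the sketch below takes one backward-difference (implicit Euler) step for a linearized one-dimensional diffusion-plus-loss system, a toy stand-in for spatially dependent cluster dynamics; SciPy's sparse direct solver plays the role of PARDISO here, and all coefficients are placeholder values rather than ParaSpace inputs.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def backward_euler_step(c, D, k, dx, dt):
    """One backward-difference (implicit Euler) step for a linearized
    1-D diffusion-plus-loss system dc/dt = D d2c/dx2 - k c, discretized
    on a uniform grid. The sparse direct solve stands in for PARDISO."""
    n = len(c)
    # second-difference Laplacian with zero-flux (Neumann) boundaries
    main = -2.0 * np.ones(n)
    main[0] = main[-1] = -1.0
    lap = sp.diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1]) / dx**2
    A = sp.identity(n) - dt * (D * lap - k * sp.identity(n))
    return spla.spsolve(A.tocsc(), c)

# Illustrative use: a near-surface concentration profile relaxing under
# diffusion and absorption at sinks; values are placeholders, not material data.
c = np.exp(-np.linspace(0.0, 5.0, 1000) ** 2)
for _ in range(100):
    c = backward_euler_step(c, D=1.0e-2, k=1.0e-1, dx=1.0e-2, dt=1.0e-3)
```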
VASP and ABINIT electronic structure codes, both of which are commonly used at NERSC
3b. Please list known limitations, obstacles, and/or bottlenecks that currently limit your ability to perform simulations you would like to run. Is there anything specific to NERSC?
Need to treat a much larger number of degrees of freedom in parallel, large sparse matrices
3c. Please fill out the following table to the best of your ability. This table provides baseline data to help extrapolate to requirements for future years. If you are uncertain about any item, please use your best estimate as a starting point for discussions.
Facilities Used or Using | NERSC, OLCF, ALCF, NSF Centers, Other: |
---|---|
Architectures Used | Cray XT, IBM Power, BlueGene, Linux Cluster, Other: |
Total Computational Hours Used per Year | Core-Hours |
NERSC Hours Used in 2009 | 0 Core-Hours |
Number of Cores Used in Typical Production Run | |
Wallclock Hours of Single Typical Production Run | |
Total Memory Used per Run | GB |
Minimum Memory Required per Core | GB |
Total Data Read & Written per Run | GB |
Size of Checkpoint File(s) | GB |
Amount of Data Moved In/Out of NERSC | GB per |
On-Line File Storage Required (For I/O from a Running Job) | TB and Files |
Off-Line Archival Storage Required | TB and Files |
Please list any required or important software, services, or infrastructure (beyond supercomputing and standard storage infrastructure) provided by HPC centers or system vendors.
4. HPC Requirements in 5 Years
4a. We are formulating the requirements for NERSC that will enable you to meet the goals you outlined in Section 2 above. Please fill out the following table to the best of your ability. If you are uncertain about any item, please use your best estimate as a starting point for discussions at the workshop.
Computational Hours Required per Year | |
---|---|
Anticipated Number of Cores to be Used in a Typical Production Run | 512 |
Anticipated Wallclock to be Used in a Typical Production Run Using the Number of Cores Given Above | 100 |
Anticipated Total Memory Used per Run | GB |
Anticipated Minimum Memory Required per Core | GB |
Anticipated total data read & written per run | GB |
Anticipated size of checkpoint file(s) | GB |
Anticipated Amount of Data Moved In/Out of NERSC | GB per |
Anticipated On-Line File Storage Required (For I/O from a Running Job) | TB and Files |
Anticipated Off-Line Archival Storage Required | TB and Files |
4b. What changes to codes, mathematical methods, and/or algorithms do you anticipate will be needed to achieve this project's scientific objectives over the next 5 years?
4c. Please list any known or anticipated architectural requirements (e.g., 2 GB memory/core, interconnect latency < 1 μs).
4d. Please list any new software, services, or infrastructure support you will need over the next 5 years.
4e. It is believed that the dominant HPC architecture in the next 3-5 years will incorporate processing elements composed of 10s-1,000s of individual cores, perhaps GPUs or other accelerators. It is unlikely that a programming model based solely on MPI will be effective, or even supported, on these machines. Do you have a strategy for computing in such an environment? If so, please briefly describe it.
5. New Science With New Resources
To help us get a better understanding of the quantitative requirements we've asked for above, please tell us: What significant scientific progress could you achieve over the next 5 years with access to 50X the HPC resources you currently have access to at NERSC? What would be the benefits to your research field if you were given access to these kinds of resources?
Please explain what aspects of "expanded HPC resources" are important for your project (e.g., more CPU hours, more memory, more storage, more throughput for small jobs, ability to handle very large jobs).
The observations of novel structural responses of tungsten surfaces to mixed helium and hydrogen plasmas demonstrate the complexity involved and the difficulty of extrapolating laboratory-based experiments to fusion device performance. We anticipate that expanded computational resources will enable us to address a number of key questions that must be answered to predict plasma-material interactions of tungsten surfaces in fusion energy devices. These key questions, which will be resolved by high-performance computing, include:
- What are the controlling kinetic processes (e.g., defect and impurity concentrations, surface diffusion, etc.) responsible for the formation of a nanoscale ‘fuzz’ on tungsten surfaces subject to high-temperature He plasmas?
- What exposure conditions (e.g., a phase boundary map of temperature, dose, dose rate, impurities) lead to nanoscale ‘fuzz’ or other detrimental surface evolution?
- How much tungsten mass loss occurs into the plasma as a result of nanoscale ‘fuzz’ formation? And, finally, how can this surface evolution be mitigated?
- What are the controlling He-defect and hydrogen/deuterium/tritium interaction mechanisms that influence hydrogen permeation and retention?
- What plasma impurities increase sputtering yields of tungsten? What mitigation measures are possible to reduce tungsten mass loss?