Delivering Efficient Parallel I/O with HDF5 on Exascale Computing Systems
February 1, 2018
Suren Byna of the Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) and Quincey Koziol of the National Energy Research Scientific Computing Center (NERSC), which is located at Berkeley Lab, are among the co-principal investigators for the ExaHDF5 project. They spoke with Exascale Computing Project (ECP) Communications at SC17. This is an edited transcript of the conversation.
Can you give us a high-level description of what your project is about?
Byna: Our project is called ExaHDF5, which is making HDF5 ready for exascale computing. HDF5 is one of the most popular parallel I/O libraries and is used on most high-end supercomputing systems; it has been around for almost 20 years now. Many applications in the ECP already use HDF5, and more applications are adopting the HDF5 API for their I/O needs. Beyond the scientific applications in the ECP, HDF5 is used in various fields of industry, including aerospace engineering, satellite data processing, and financial technologies. So overall, HDF5 development is useful for a large number of applications within and outside the ECP.
What do you have to do to get prepared for the exascale era?
Byna: The HDF5 library provides parallel I/O technology, basically to store and retrieve data fast on HPC systems. And HPC systems are evolving in multiple areas as we move toward exascale. One example is that the hardware is adding more levels of storage and memory, so the parallel I/O has to adapt to that. The applications are producing data in greater volume. The complexity of the application data is also increasing. And the speed of data generation and analysis is increasing. To address all these challenges in hardware, software, and applications, we need to add new features to HDF5 that can cope with this complexity across exascale architectures.
Quincey, tell us about your role in the research. Where do you focus your energy?
Koziol: Well, for the past 20 years I've been the chief architect for the software package, working on it consistently since its inception back in 1997. Over that time I have pushed the software forward to greater and greater scales and added the features that applications need. And as Suren was saying, there's a change coming with deeper memory and storage hierarchies and much, much higher scales of systems being deployed, so I fill a lot of the roles in designing and architecting the software to meet application needs in those environments.
What do you see as the biggest challenges right now with your research?
Koziol: In a general sense, I think the biggest challenges are scalability for the applications and looking far enough ahead to help them transition through some of the forthcoming changes in file system technology and, as I said earlier, in system scale.
Have you developed new collaborative relationships as a result of working on this project?
Byna: Yes. HDF5 is used by many applications already, but new avenues have opened up as part of the ECP. One example is the AMR co-design center, AMReX. It was using a custom binary data format, but when it was funded, the team talked to us, and we have been pursuing how to make their data usable with HDF5, which offers better long-term portability of the data, so that is one of the major collaborations, with seven or eight applications built on top of it. And Brian Van Straalen from the Chombo AMR library is someone with whom we have an existing collaboration that is continuing as part of the ECP. The ECP also opened HDF5 up to new software technology collaborations, such as with the ADIOS and DataLib teams, so we are working with them on how to read HDF5 data or netCDF data, that kind of thing. And of course there are projects on data compression and on making memory hierarchies easier to use, so all those projects are in the ECP, and the truth is we are collaborating with most of them as well. So the ECP opened up a lot of collaborations with software technologies and applications.
Has your research taken advantage of any of the ECP's computing allocations?
Byna: Yes. One of the features that we are developing is a bit more futuristic in terms of how to move the data through the hierarchy of storage. New systems such as Cori at NERSC and Theta at Argonne provide those multilayer storage hierarchies, so we are testing and using those resources through the ECP.
If your project weren’t a part of the ECP, what would you be doing differently?
Byna: That’s a difficult question. We would have been doing the work, but the funding source might have been different, and it would have been more of a struggle to secure it.
Koziol: The ECP really allowed us to expand beyond the boundaries of what we’ve been able to do before. ECP funding puts a lot more momentum behind the work; it represents two or three times the amount of effort that we were previously able to devote to the project. Before, we were struggling much more to fit in as many features and plan for this kind of future, and the ECP has opened up a much broader array of applications to collaborate with, plus more funding to actually support the features we need.
Has the ECP changed your timeframe?
Koziol: Yes, I would say it has probably brought forward the kinds of things we can do by about a factor of two. I’m guessing, but that’s the general idea.
Can you talk about some milestones you’ve achieved so far?
Byna: One milestone that we have achieved so far is a feature called the Data Elevator. The Data Elevator, as the name says, is an elevator between the different stages of the memory and storage hierarchy. It moves the data transparently, so applications just have to use the HDF5 API and internally we take care of moving the data between the layers; a minimal sketch of that kind of unmodified HDF5 code is shown below. Another feature, which Quincey is working on, is full SWMR.
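Here is a minimal sketch of a standard parallel HDF5 write, assuming MPI and an HDF5 build with parallel support; the file name, dataset name, and sizes are illustrative only. The point is that the application uses nothing beyond the usual HDF5 calls, and, as Byna describes, the Data Elevator is intended to redirect such I/O to faster storage tiers transparently.

```c
/* Minimal parallel HDF5 write. File name, dataset name, and sizes are
 * illustrative; the application uses only the standard HDF5 API, and
 * (as described above) the Data Elevator is meant to redirect the I/O
 * to a faster storage tier transparently. */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file collectively with the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("output.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One row of 1024 doubles per rank. */
    hsize_t dims[2] = {(hsize_t)nprocs, 1024};
    hid_t filespace = H5Screate_simple(2, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Each rank writes its own row of the dataset. */
    hsize_t start[2] = {(hsize_t)rank, 0}, count[2] = {1, 1024};
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(2, count, NULL);

    double buf[1024];
    for (int i = 0; i < 1024; i++) buf[i] = rank + i * 1e-3;

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```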
Koziol: SWMR is short for single-writer, multiple-readers. It enables applications, particularly at experimental and observational facilities like the Linac Coherent Light Source at the SLAC National Accelerator Laboratory, which produce very high-bandwidth streams of data that need to be monitored in real time by a group of readers, to access the data concurrently as it’s being produced, at the full speed of the detectors operating there. So the capability to do that is something brand new. We’ve prototyped it before, but we’ve never actually been able to bring it all the way out to the application level to fully support their needs in these kinds of areas. This is an important capability that we’ve been trying for five or six years to get out to production, and the ECP has really enabled that.
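For context, below is a minimal sketch of the single-writer/multiple-readers pattern as it exists in HDF5 1.10, which the full-SWMR work described here builds on; the file name, dataset name, and sizes are illustrative. The writer appends records and flushes them, while readers open the same file concurrently and refresh to pick up new data.

```c
/* Sketch of HDF5 1.10 SWMR usage: one writer appends to an
 * extendible dataset while readers poll it concurrently. */
#include <hdf5.h>

void swmr_writer(void)
{
    /* SWMR requires the latest file-format version. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);
    hid_t file = H5Fcreate("stream.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Extendible, chunked 1-D dataset for the incoming stream. */
    hsize_t dims = 0, maxdims = H5S_UNLIMITED, chunk = 1024;
    hid_t space = H5Screate_simple(1, &dims, &maxdims);
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, &chunk);
    hid_t dset = H5Dcreate2(file, "stream", H5T_NATIVE_FLOAT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);
    H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space);

    /* Switch the file into single-writer/multiple-readers mode. */
    H5Fstart_swmr_write(file);
    dset = H5Dopen2(file, "stream", H5P_DEFAULT);

    float buf[1024] = {0};
    for (int step = 0; step < 100; step++) {
        hsize_t newsize = (hsize_t)(step + 1) * 1024;
        H5Dset_extent(dset, &newsize);            /* grow the dataset   */
        hid_t fspace = H5Dget_space(dset);
        hsize_t start = (hsize_t)step * 1024, count = 1024;
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, NULL,
                            &count, NULL);
        hid_t mspace = H5Screate_simple(1, &count, NULL);
        H5Dwrite(dset, H5T_NATIVE_FLOAT, mspace, fspace, H5P_DEFAULT, buf);
        H5Dflush(dset);                           /* publish to readers */
        H5Sclose(mspace); H5Sclose(fspace);
    }
    H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
}

void swmr_reader(void)
{
    /* Readers open the same file concurrently and poll for new data. */
    hid_t file = H5Fopen("stream.h5", H5F_ACC_RDONLY | H5F_ACC_SWMR_READ,
                         H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "stream", H5P_DEFAULT);
    H5Drefresh(dset);                             /* pick up new extent */
    hid_t space = H5Dget_space(dset);
    hsize_t n;
    H5Sget_simple_extent_dims(space, &n, NULL);
    /* ...read the newly appended elements... */
    H5Sclose(space); H5Dclose(dset); H5Fclose(file);
}
```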
What’s your vision for this project? What do you see happening in the next few years?
Byna: We did propose a lot more features for HDF5. Some of them involve querying and indexing the data and metadata in HDF5. Those will open HDF5 up to more ECP applications as well as other industry applications. Overall, expanding the use of HDF5 even further within the DOE complex as well as outside it, and providing performance at large scale, are the major goals that we have been pursuing.
How important do you think what you just described is to the holistic effort to build a capable exascale ecosystem?
Koziol: Looking at the 22 ECP apps that got funded, 17 want to use HDF5, so HDF5 is an important technology that needs to be ready for exascale.
Is there anything else you’d like to add that we’ve not discussed?
Byna: I’m in the Computational Research Division, so I also do research on more futuristic technologies. One of the things that Quincey and I work on is called proactive data containers. This is about using the object-store ideas of the future. Right now, everything is parallel file systems, POSIX-based file systems, and we want to move on to the future, which is object-based storage. So the proactive data containers project is looking into how to provide an API, a programming interface, for users to take advantage of object stores, and it makes most of the data management transparent enough that users don’t have to worry too much about where their data is or how to access it, that kind of thing. We can make those things much more automatic and transparent. That is one of the things we are working on.
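As a purely hypothetical illustration, and not the actual Proactive Data Containers API, the sketch below shows the kind of interface being described: the application puts and gets named objects, and decisions about where the bytes actually live (DRAM, burst buffer, parallel file system) stay inside the runtime, which is stubbed here with a single in-memory table.

```c
/* Hypothetical object-store interface, NOT the Proactive Data
 * Containers API. It only illustrates the idea that applications
 * name objects and the runtime decides where the bytes live. */
#include <stdio.h>
#include <string.h>

#define MAX_OBJS  16
#define MAX_BYTES 256

/* Stub "store": a real runtime would place objects across tiers. */
typedef struct { char name[64]; char data[MAX_BYTES]; size_t size; } object_t;
static object_t store[MAX_OBJS];
static int nobjs = 0;

int object_put(const char *name, const void *buf, size_t nbytes)
{
    if (nobjs == MAX_OBJS || nbytes > MAX_BYTES) return -1;
    snprintf(store[nobjs].name, sizeof store[nobjs].name, "%s", name);
    memcpy(store[nobjs].data, buf, nbytes);
    store[nobjs].size = nbytes;
    nobjs++;
    return 0;
}

int object_get(const char *name, void *buf, size_t nbytes)
{
    for (int i = 0; i < nobjs; i++)
        if (strcmp(store[i].name, name) == 0 && store[i].size <= nbytes) {
            memcpy(buf, store[i].data, store[i].size);
            return (int)store[i].size;
        }
    return -1;
}

int main(void)
{
    /* The application works with object names only; no file paths,
     * offsets, or storage-tier names appear in user code. */
    const char *msg = "timestep 42 field data";
    object_put("simulation/timestep42", msg, strlen(msg) + 1);

    char buf[MAX_BYTES];
    if (object_get("simulation/timestep42", buf, sizeof buf) > 0)
        printf("retrieved: %s\n", buf);
    return 0;
}
```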
Is there anything else you want to bring up?
Koziol: I also want to mention that this is a collaborative project, so we are working with researchers at Argonne as well as The HDF Group, one of the originators of HDF5, and reaching out to Oak Ridge and other places. We’ve tried to make this as broad-based as possible. There’s a tremendous amount of application buy-in, both within DOE and outside of it, so this is a very, very broad infrastructure-based project. So that’s just a shout-out to all our collaborators.
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.