NERSC, IBM Collaborate on New Software Strategy To Simplify Supercomputing
December 1, 2005
This month, IBM announced an innovative supercomputing software strategy that allows customers to leverage the General Parallel File System (GPFS) across mixed-vendor supercomputing systems for the first time. The strategy is the result of a direct partnership with NERSC.
GPFS is an advanced file system for high-performance computing clusters that provides high-speed file access to applications executing on multiple nodes of a Linux or AIX cluster. GPFS’s scalability and performance are designed to meet the needs of data-intensive applications such as engineering design, digital media, data mining, financial analysis, seismic data processing and scientific research.
“Thank you for driving us in this direction,” wrote IBM Federal Client Executive Mike Henesy to NERSC General Manager Bill Kramer as IBM announced the project. “It’s quite clear we would never have reached this point without your leadership!”
Staff at NERSC used GPFS to create a scalable parallel file system capable of supporting hundreds of terabytes of storage within a single, highly reliable file system. In November 2005, NERSC implemented a production version of the NERSC Global Filesystem (NGF) using GPFS. NGF was demonstrated at the SC|05 conference, where it was also used for several HPC Analytics and StorCloud Challenges.
The production NGF grew out of the multi-year Global Unified Parallel File System (GUPFS) Project at NERSC to provide a scalable, high-performance, high-bandwidth, shared-disk file system for use by all of NERSC’s high-performance production computational systems. NGF provides a unified file namespace for these systems and is being integrated with the High Performance Storage System (HPSS), while performing at or very close to the rates achieved by parallel file systems within a cluster. It is also possible to distribute GPFS-based file systems to remote facilities as local file systems over the Internet.
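To make the unified namespace concrete, here is a minimal Python sketch of what it means in practice; the /global mount point and file names are hypothetical, not NERSC’s actual paths. On an NGF-style global file system, a file written under the shared mount on one cluster is visible at the same path from every other client cluster, with no copying or staging step.

    # Illustration only: the mount point and file name are hypothetical.
    import socket
    from pathlib import Path

    shared = Path("/global/project/demo/run42.out")  # same path on every cluster
    shared.parent.mkdir(parents=True, exist_ok=True)

    # A job on one system records its output...
    with shared.open("a") as f:
        f.write(f"written on {socket.gethostname()}\n")

    # ...and a job on any other system that mounts the global file system
    # reads the identical file, with no transfer step in between.
    print(shared.read_text())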
“We spent a long time looking at all the possible configurations of storage hardware, fabric hardware and filesystem software during the GUPFS project. Eventually we realized the critical component was filesystem software,” said Kramer. “It came down to a limited number of choices, and GPFS was superior in many aspects, but it was limited to IBM hardware only. NERSC helped convince IBM to make GPFS available on any vendor’s hardware, a requirement from NERSC’s point of view. We are happy IBM decided to take this step to make GPFS more open.”
The typical state of many high-performance computational environments is one in which each large computational and support system has its own large, independent disk store, supplemented by Network Attached Storage (NAS) accessed via protocols such as NFS or DFS, and an archival storage server such as HPSS. These approaches lead to wasteful replication of customer files on multiple systems and an increased, nonproductive workload on customers, who must move and manage these files themselves. This, in turn, burdens the infrastructure, which must support file transfers between the systems as well as to the storage server. In addition, the existing environment prevents the consolidation of storage between systems, limiting each system’s working storage to its local disk capacity.
The environment envisioned by the NERSC GUPFS project is one in which the large high-performance computational systems and support systems can access a consolidated disk store. NGF is the first step toward that vision and currently supports five major systems: an IBM Power 3+ SP, an IBM Power 5 SP, an SGI Altix, a Linux Networx Opteron/InfiniBand cluster, and the PDSF Intel/Ethernet cluster, totaling over 1,200 client nodes.
“In terms of vendors and number of nodes, this is the largest, most diverse implementation of a global filesystem we know of,” said Greg Butler, a computer engineer at NERSC who has worked on GUPFS and NGF. “Not only is it leading-edge technology, but it is standing up to real production requirements supporting a wide range of science. Our users are very pleased with NGF because it makes their work simpler and easier.”
A major use of the file system will be in support of parallel scientific applications performing high-volume, concurrent and simultaneous I/O. This environment will eliminate unnecessary data replication, simplify the customer environment, provide better distribution of storage resources, and permit the management of storage as a separate entity while minimizing impacts on the computational systems. NGF directly accesses storage through multivendor shared-disk file systems with a unified file namespace. Storage servers accessing the consolidated storage through the shared-disk file systems will provide hierarchical storage management (HSM), backup, and archival services. The deployed file system will be integrated with the NERSC HPSS archival system in 2006.
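As a rough illustration of the concurrent I/O pattern described above, the following hypothetical sketch uses Python with mpi4py (an assumption for illustration, not part of the NGF deployment itself): each MPI rank writes its own disjoint block of a single shared file, the access pattern a parallel file system such as GPFS is designed to serve. The file path and block size are invented.

    # Hypothetical sketch of concurrent parallel I/O into one shared file.
    # Run with, e.g.: mpirun -n 4 python parallel_write.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Every rank opens the same file on the shared mount (path is invented).
    amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
    fh = MPI.File.Open(comm, "/global/scratch/demo.dat", amode)

    # Each rank writes its buffer at a disjoint offset, so all ranks
    # perform simultaneous I/O on one file.
    block = np.full(1024, rank, dtype="i4")
    fh.Write_at_all(rank * block.nbytes, block)

    fh.Close()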
A heterogeneous approach for NGF is a key component of “Science-Driven Computing,” NERSC’s five-year plan recently published at <http://www.nersc.gov/news/reports/>. This approach is important because NERSC typically procures a major new computational system every three years, then operates it for five years to support DOE research. Consequently, NERSC operates in a heterogeneous environment with systems from multiple vendors, multiple platforms, different system architectures, and multiple operating systems. The deployed file system must operate in this same heterogeneous client environment throughout its lifetime.
GPFS/HPSS Development
NERSC’s Mass Storage Group collaborated with IBM to develop a Hierarchical Storage Manager (HSM) that can be used with IBM’s GPFS. The HSM capability with GPFS will provide a recoverable GPFS file system that is transparent to users and fully backed up and recoverable from NERSC’s multi-petabyte archive on HPSS.
One of the key capabilities of the GPFS/HPSS HSM is that users’ files will automatically be backed up to HPSS as they are created. Additionally, files on the GPFS file system that have not been accessed for a specified period of time will be automatically migrated off on-line resources as space is needed for files currently in use. Since the purged files have already been migrated to HPSS, they can be retrieved automatically when users need them.
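The behavior described above can be pictured with a toy sweep like the one below. This is not IBM’s HSM or HPSS code; it is a hypothetical Python sketch with invented mount points and an invented 90-day policy that mimics the two actions in the preceding paragraph: copy new files to the archive tier, and purge on-line copies that have sat idle, leaving them recallable from the archive.

    # Toy sketch of an HSM-style sweep; not IBM's GPFS/HPSS implementation.
    # Mount points and the 90-day idle policy are invented for illustration.
    import shutil
    import time
    from pathlib import Path

    FS_ROOT = Path("/global/ngf")      # hypothetical on-line file system
    ARCHIVE = Path("/archive/mirror")  # stand-in for the HPSS archive tier
    IDLE_CUTOFF = 90 * 24 * 3600       # seconds: purge files unread for 90 days

    def sweep() -> None:
        now = time.time()
        for path in FS_ROOT.rglob("*"):
            if not path.is_file():
                continue
            st = path.stat()
            dest = ARCHIVE / path.relative_to(FS_ROOT)
            dest.parent.mkdir(parents=True, exist_ok=True)
            if not dest.exists() or dest.stat().st_mtime < st.st_mtime:
                # Back up: archive any file that is new or changed on-line.
                shutil.copy2(path, dest)
            elif now - st.st_atime > IDLE_CUTOFF:
                # Purge: truncate a long-idle file to free on-line space;
                # the archived copy remains, so it can be recalled on demand.
                path.write_bytes(b"")

    if __name__ == "__main__":
        sweep()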
“This gives the user the appearance of almost unlimited storage space without the cost,” said NERSC’s Mass Storage Group Leader Nancy Meyer.

The first system test of the HSM will take place at the San Diego Supercomputer Center and is scheduled to begin in December 2005. In the spring of 2006, HSM testing will begin on Linux systems at NERSC.
As deployed at NERSC, NGF is expected to have a long life, 10 to 15 years or more. During this time the file system will change and evolve, as will the systems in the center that use it. User data is also expected to have long-term persistence in the file system, ranging from months and years up to the deployed life of the file system, at the discretion of the users.
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.