
Kristy Kallback-Rose

Kristy A. Kallback-Rose
Group Lead
Storage Systems Group
National Energy Research Scientific Computing Center
Lawrence Berkeley National Laboratory
1 Cyclotron Road
Mail Stop 59R4010A
Berkeley, CA 94720 US
Fax: (510) 486-6459

Biographical Sketch

Kristy Kallback-Rose joined the NERSC Storage Systems Group in early 2017 as a senior storage analyst and now leads the group. Kristy's recent work has been operationally focused on filesystems and archival storage, including GPFS and HPSS. Prior to that, she worked in a variety of roles including grid computing in support of the ATLAS project, databases, development, and instruction. Kristy has undergraduate degrees in Japanese and Physics and a master's degree in Physics, and is pleased to be working at NERSC in support of scientific research.

Conference Papers

K Kallback-Rose, D Antolovic, R Ping, K Seiffert, C Stewart, T Miller, "Conducting K-12 Outreach to Evoke Early Interest in IT, Science, and Advanced Technology", ACM, July 16, 2012,

This is a preprint of a paper presented at XSEDE '12: The 1st Conference of the Extreme Science and Engineering Discovery Environment, Chicago, Illinois.

Presentation/Talks

Nicholas Balthaser, Francis Dequenne, Melinda Jacobsen, Owen James, Kristy Kallback-Rose, Kirill Lozinskiy, NERSC HPSS Site Update, 2021 HPSS User Forum, October 20, 2021,

Nicholas Balthaser, Wayne Hurlbert, Melinda Jacobsen, Owen James, Kristy Kallback-Rose, Kirill Lozinskiy, NERSC HPSS Site Update, 2020 HPSS User Forum, October 9, 2020,

Report on recent projects and challenges running HPSS at NERSC, including recent air quality (AQI) issues and the upcoming HPSS upgrade.

Greg Butler, Ravi Cheema, Damian Hazen, Kristy Kallback-Rose, Rei Lee, Glenn Lockwood, NERSC Community File System, March 4, 2020,

Presentation at the Storage Technology Showcase providing an update on NERSC's Storage 2020 Strategy & Progress and the newly deployed Community File System, including the data migration process.

Nicholas Balthaser, Damian Hazen, Wayne Hurlbert, Owen James, Kristy Kallback-Rose, Kirill Lozinskiy, Moving the NERSC Archive to a Green Data Center, Storage Technology Showcase 2020, March 3, 2020,

Description of methods used and challenges involved in moving the NERSC tape archive to a new data center with environmental cooling.

Glenn K. Lockwood, Kirill Lozinskiy, Kristy Kallback-Rose, NERSC's Perlmutter System: Deploying 30 PB of all-NVMe Lustre at scale, Lustre BoF at SC19, November 19, 2019,

Update at SC19 Lustre BoF on collaborative work with Cray on deploying an all-flash Lustre tier for NERSC's Perlmutter Shasta system.

Nicholas Balthaser, Kirill Lozinskiy, Melinda Jacobsen, Kristy Kallback-Rose, NERSC Migration from Oracle Tape Libraries and GPFS-HPSS-Integration Proof of Concept, October 16, 2019,

NERSC updates on Storage 2020 Strategy & Progress, GHI Testing, Tape Library Update, and Futures.

Ravi Cheema, Kristy Kallback-Rose, Storage 2020 Strategy & Progress - NERSC Site Update at HPCXXL User Group Meeting, September 24, 2019,

NERSC site update including Systems Overview, Storage 2020 Strategy & Progress, GPFS-HPSS-Integration Testing, Tape Library Update and Futures.

Kirill Lozinskiy, Lisa Gerhardt, Annette Greiner, Ravi Cheema, Damian Hazen, Kristy Kallback-Rose, Rei Lee, User-Friendly Data Management for Scientific Computing Users, Cray User Group (CUG) 2019, May 9, 2019,

Wrangling data at a scientific computing center can be a major challenge for users, particularly when quotas may impact their ability to utilize resources. In such an environment, a task as simple as listing space usage for one's files can take hours. The National Energy Research Scientific Computing Center (NERSC) has roughly 50 PB of shared storage utilizing more than 4.6B inodes, and a 146 PB high-performance tape archive, all accessible from two supercomputers. As data volumes increase exponentially, managing data is becoming a larger burden on scientists. To ease the pain, we have designed and built a “Data Dashboard”. Here, in a web-enabled visual application, our 7,000 users can easily review their usage against quotas, discover patterns, and identify candidate files for archiving or deletion. We describe this system, the framework supporting it, and the challenges for such a framework moving into the exascale age.

NERSC site update including Systems Overview, Storage 2020 Strategy & Progress and Superfacility Initiative.

NERSC site update focusing on plans to implement new tape technology at the Berkeley Data Center.

NERSC Site Report focusing on plans for migration of tape-based system to new location and new technology, and collection of metrics for GPFS.

Reports

GK Lockwood, D Hazen, Q Koziol, RS Canon, K Antypas, J Balewski, N Balthaser, W Bhimji, J Botts, J Broughton, TL Butler, GF Butler, R Cheema, C Daley, T Declerck, L Gerhardt, WE Hurlbert, KA Kallback-Rose, S Leak, J Lee, R Lee, J Liu, K Lozinskiy, D Paul, Prabhat, C Snavely, J Srinivasan, T Stone Gibbins, NJ Wright, "Storage 2020: A Vision for the Future of HPC Storage", October 20, 2017, LBNL LBNL-2001072,

As the DOE Office of Science's mission computing facility, NERSC will follow this roadmap and deploy these new storage technologies to continue delivering storage resources that meet the needs of its broad user community. NERSC's diversity of workflows encompasses significant portions of open science workloads as well, and the findings presented in this report are also intended to be a blueprint for how the evolving storage landscape can be best utilized by the greater HPC community. Executing the strategy presented here will ensure that emerging I/O technologies will be both applicable to and effective in enabling scientific discovery through extreme-scale simulation and data analysis in the coming decade.