Kirill Lozinskiy
Kirill Lozinskiy is a computer systems engineer at NERSC. He has been a member of the Storage Systems Group since 2016, where he shares responsibility for maintaining the HPSS archive and the center's global file systems, which hold hundreds of petabytes of data spanning decades of scientific research.
Before coming to NERSC, Kirill was at the Broad Institute, a biomedical and genomic research center, where he maintained over 30 PB of file and object storage and worked on HPC cloud computing initiatives with a focus on DevOps and automation.
Conference Papers
Lisa Gerhardt, Stephen Simms, David Fox, Kirill Lozinskiy, Wahid Bhimji, Ershaad Basheer, Michael Moore, "Nine Months in the Life of an All-flash File System", Proceedings of the 2024 Cray User Group, May 8, 2024,
NERSC’s Perlmutter scratch file system, an all-flash Lustre storage system running on HPE (Cray) ClusterStor E1000 Storage Systems, has a capacity of 36 petabytes and a theoretical peak performance exceeding 7 terabytes per second across HPE’s Slingshot network fabric. Deploying an all-flash Lustre file system was a leap forward in meeting the diverse I/O needs of NERSC. With over 10,000 users representing over 1,000 different projects that span multiple disciplines, a file system that could overcome the performance limitations of spinning disk and reduce performance variation was highly desirable. While solid state provided excellent performance gains, there were still challenges that required observation and tuning. Working with HPE’s storage team, NERSC staff engaged in an iterative process that increased performance and provided more predictable outcomes. Through the use of IOR and OBDfilter tests, NERSC staff were able to closely monitor the performance of the file system at regular intervals to inform the process and chart progress. This paper documents the results of, and reports insights derived from, more than nine months of NERSC’s continuous performance testing, and provides a comprehensive discussion of the tuning and adjustments that were made to improve performance.
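As a rough illustration of the recurring benchmarking described in this abstract, the sketch below drives a single IOR run from Python and appends the reported bandwidth to a log so trends can be charted over time. The node counts, paths, and flags are placeholders, and this is not NERSC's actual test harness; OBDfilter-level testing is omitted.

```python
#!/usr/bin/env python3
"""Minimal sketch of a recurring file-system bandwidth check, loosely in the
spirit of the IOR-based monitoring described above. Paths, node counts, and
IOR flags are illustrative placeholders, not NERSC's production harness."""

import csv
import re
import subprocess
import time
from pathlib import Path

LOG = Path("fs_perf_log.csv")          # where successive results accumulate
TEST_DIR = "/pscratch/perf_check"      # hypothetical scratch test directory

# A small file-per-process IOR run: 64 ranks on 8 nodes, 1 MiB transfers,
# 4 GiB per rank, write phase followed by read phase.
IOR_CMD = [
    "srun", "-N", "8", "-n", "64",
    "ior", "-w", "-r", "-F",
    "-t", "1m", "-b", "4g",
    "-o", f"{TEST_DIR}/ior_testfile",
]

def run_ior() -> dict:
    """Run IOR once and pull Max Write / Max Read (MiB/s) from its summary."""
    out = subprocess.run(IOR_CMD, capture_output=True, text=True, check=True).stdout
    result = {"timestamp": int(time.time())}
    for phase in ("Write", "Read"):
        m = re.search(rf"Max {phase}:\s+([\d.]+)\s+MiB/sec", out)
        result[phase.lower() + "_mib_s"] = float(m.group(1)) if m else None
    return result

def append_log(row: dict) -> None:
    """Append one measurement so regressions stand out across runs."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)

if __name__ == "__main__":
    append_log(run_ior())
```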
Glenn K Lockwood, Alberto Chiusole, Lisa Gerhardt, Kirill Lozinskiy, David Paul, Nicholas J Wright, "Architecture and Performance of Perlmutter’s 35 PB ClusterStor E1000 All-Flash File System", Proceedings of the 2021 Cray User Group, May 3, 2021,
NERSC's newest system, Perlmutter, features a 35 PB all-flash Lustre file system built on HPE Cray ClusterStor E1000. We present its architecture, early performance figures, and performance considerations unique to this architecture. We demonstrate the performance of E1000 OSSes through low-level Lustre tests that achieve over 90% of the theoretical bandwidth of the SSDs at the OST and LNet levels. We also show end-to-end performance for both traditional dimensions of I/O performance (peak bulk-synchronous bandwidth) and non-optimal workloads endemic to production computing (small, incoherent I/Os at random offsets) and compare them to NERSC's previous system, Cori, to illustrate that Perlmutter achieves the performance of a burst buffer and the resilience of a scratch file system. Finally, we discuss performance considerations unique to all-flash Lustre and present ways in which users and HPC facilities can adjust their I/O patterns and operations to make optimal use of such architectures.
Glenn K. Lockwood, Kirill Lozinskiy, Lisa Gerhardt, Ravi Cheema, Damian Hazen, Nicholas J. Wright, "Designing an All-Flash Lustre File System for the 2020 NERSC Perlmutter System", Proceedings of the 2019 Cray User Group, Montreal, January 1, 2019,
New experimental and AI-driven workloads are moving into the realm of extreme-scale HPC systems at the same time that high-performance flash is becoming cost-effective to deploy at scale. This confluence poses a number of new technical and economic challenges and opportunities in designing the next generation of HPC storage and I/O subsystems to achieve the right balance of bandwidth, latency, endurance, and cost. In this paper, we present the quantitative approach to requirements definition that resulted in the 30 PB all-flash Lustre file system that will be deployed with NERSC's upcoming Perlmutter system in 2020. By integrating analysis of current workloads and projections of future performance and throughput, we were able to constrain many critical design space parameters and quantitatively demonstrate that Perlmutter will not only deliver optimal performance but also effectively balance cost with capacity, endurance, and many modern features of Lustre.
Book Chapters
Glenn K. Lockwood, Kirill Lozinskiy, Lisa Gerhardt, Ravi Cheema, Damian Hazen, Nicholas J. Wright, "A Quantitative Approach to Architecting All-Flash Lustre File Systems", ISC High Performance 2019: High Performance Computing, edited by Michele Weiland, Guido Juckeland, Sadaf Alam, Heike Jagode (Springer International Publishing, 2019), pp. 183–197, doi: 10.1007/978-3-030-34356-9_16
New experimental and AI-driven workloads are moving into the realm of extreme-scale HPC systems at the same time that high-performance flash is becoming cost-effective to deploy at scale. This confluence poses a number of new technical and economic challenges and opportunities in designing the next generation of HPC storage and I/O subsystems to achieve the right balance of bandwidth, latency, endurance, and cost. In this work, we present quantitative models that use workload data from existing, disk-based file systems to project the architectural requirements of all-flash Lustre file systems. Using data from NERSC’s Cori I/O subsystem, we then demonstrate the minimum required capacity for data, capacity for metadata and data-on-MDT, and SSD endurance for a future all-flash Lustre file system.
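To make the flavor of such a projection concrete, the sketch below performs an illustrative capacity-and-endurance estimate from hypothetical workload observations. The input figures and simplified formulas are placeholders, not the models or data presented in the paper.

```python
"""Illustrative back-of-the-envelope projection in the spirit of the
quantitative approach described above: given write traffic observed on an
existing disk-based file system, estimate the capacity and SSD endurance
(drive writes per day, DWPD) a flash replacement would need. All inputs
are made-up placeholders, not figures from the paper."""

# Hypothetical observations from the existing disk-based system
daily_write_volume_tb = 1_500.0      # TB written to scratch per day (placeholder)
peak_resident_data_pb = 20.0         # peak data resident before purging (placeholder)
metadata_overhead = 0.10             # headroom reserved for metadata / data-on-MDT
system_lifetime_years = 5

# Capacity: hold the peak resident data set plus metadata headroom
required_capacity_pb = peak_resident_data_pb * (1.0 + metadata_overhead)

# Endurance: total bytes written over the system lifetime, expressed both as
# total petabytes written and as drive writes per day against usable capacity
lifetime_days = system_lifetime_years * 365
total_writes_pb = daily_write_volume_tb * lifetime_days / 1_000.0
required_dwpd = daily_write_volume_tb / (required_capacity_pb * 1_000.0)

print(f"Required usable capacity : {required_capacity_pb:.1f} PB")
print(f"Lifetime bytes written   : {total_writes_pb:.0f} PB")
print(f"Required endurance       : {required_dwpd:.3f} DWPD")
```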
Presentations/Talks
Nicholas Balthaser, Francis Dequenne, Melinda Jacobsen, Owen James, Kristy Kallback-Rose, Kirill Lozinskiy, NERSC HPSS Site Update, 2021 HPSS User Forum, October 20, 2021,
- Download File: NERSC-Site-HUF-2021.pdf (pdf: 10 MB)
Nicholas Balthaser, Wayne Hurlbert, Melinda Jacobsen, Owen James, Kristy Kallback-Rose, Kirill Lozinskiy, NERSC HPSS Site Update, 2020 HPSS User Forum, October 9, 2020,
Report on recent projects and challenges running HPSS at NERSC, including recent AQI (air quality) issues and an upcoming HPSS upgrade.
Nicholas Balthaser, Damian Hazen, Wayne Hurlbert, Owen James, Kristy Kallback-Rose, Kirill Lozinskiy, Moving the NERSC Archive to a Green Data Center, Storage Technology Showcase 2020, March 3, 2020,
- Download File: archive-move-NERSC-2020-02-05.pptx (pptx: 22 MB)
Description of methods used and challenges involved in moving the NERSC tape archive to a new data center with environmental cooling.
Nicholas Balthaser, Wayne Hurlbert, Kirill Lozinskiy, Owen James, Regent System Move Update, NERSC All-to-All Meeting, December 16, 2019,
Update on moving the NERSC center backup system from the Oakland Scientific Facility to LBL Building 59.
Glenn K. Lockwood, Kirill Lozinskiy, Kristy Kallback-Rose, NERSC's Perlmutter System: Deploying 30 PB of all-NVMe Lustre at scale, Lustre BoF at SC19, November 19, 2019,
- Download File: 2019-11-Lustre-BOF-talk-KKR-no-notes.pptx (pptx: 3 MB)
Update at SC19 Lustre BoF on collaborative work with Cray on deploying an all-flash Lustre tier for NERSC's Perlmutter Shasta system.
Nicholas Balthaser, Kirill Lozinskiy, Melinda Jacobsen, Kristy Kallback-Rose, NERSC Migration from Oracle Tape Libraries and GPFS-HPSS-Integration Proof of Concept, October 16, 2019,
- Download File: NERSC-HUF-Update-Oct-2019.pdf (pdf: 61 MB)
NERSC updates on Storage 2020 strategy and progress, GHI testing, tape library status, and future plans.
Kirill Lozinskiy, Glenn K. Lockwood, Lisa Gerhardt, Ravi Cheema, Damian Hazen, Nicholas J. Wright, A Quantitative Approach to Architecting All-Flash Lustre File Systems, Lustre User Group (LUG) 2019, May 15, 2019,
Kirill Lozinskiy, Lisa Gerhardt, Annette Greiner, Ravi Cheema, Damian Hazen, Kristy Kallback-Rose, Rei Lee, User-Friendly Data Management for Scientific Computing Users, Cray User Group (CUG) 2019, May 9, 2019,
Wrangling data at a scientific computing center can be a major challenge for users, particularly when quotas may impact their ability to utilize resources. In such an environment, a task as simple as listing space usage for one's files can take hours. The National Energy Research Scientific Computing Center (NERSC) has roughly 50 PB of shared storage utilizing more than 4.6 billion inodes, and a 146 PB high-performance tape archive, all accessible from two supercomputers. As data volumes increase exponentially, managing data is becoming a larger burden on scientists. To ease the pain, we have designed and built a “Data Dashboard”. Here, in a web-enabled visual application, our 7,000 users can easily review their usage against quotas, discover patterns, and identify candidate files for archiving or deletion. We describe this system, the framework supporting it, and the challenges for such a framework moving into the exascale age.
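For illustration only, the sketch below shows the kind of quota roll-up a usage dashboard like this performs: compare per-user space and inode usage against quotas and flag accounts approaching their limits. The field names, thresholds, and data are invented and do not reflect the actual Data Dashboard implementation.

```python
"""Toy sketch of a dashboard-style quota roll-up: compare per-user space and
inode usage against quotas and flag accounts near their limits. Names and
numbers are invented for illustration."""

from dataclasses import dataclass

@dataclass
class Usage:
    user: str
    space_tb: float
    space_quota_tb: float
    inodes: int
    inode_quota: int

    def pct_space(self) -> float:
        return 100.0 * self.space_tb / self.space_quota_tb

    def pct_inodes(self) -> float:
        return 100.0 * self.inodes / self.inode_quota

def over_threshold(records: list[Usage], pct: float = 90.0) -> list[Usage]:
    """Return users above the given percentage of either quota."""
    return [r for r in records if r.pct_space() >= pct or r.pct_inodes() >= pct]

# Example: two users, one of whom is close to the inode limit
records = [
    Usage("alice", 12.0, 20.0, 8_000_000, 10_000_000),
    Usage("bob", 3.5, 20.0, 9_800_000, 10_000_000),
]
for r in over_threshold(records):
    print(f"{r.user}: {r.pct_space():.0f}% space, {r.pct_inodes():.0f}% inodes")
```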
Kristy Kallback-Rose, Tape Library and Media Evaluation and Planning - NERSC Site Update at the Linear Tape User Group, May 2, 2018,
- Download File: LTUG-2018-NERSC-Site-Update-compressed.pptx (pptx: 12 MB)
NERSC site update focusing on plans to implement new tape technology at the Berkeley Data Center.
Kristy Kallback-Rose, NERSC Site Report at HPC Advisory Council and HPCXXL User Group, April 12, 2018,
- Download File: HPCXXL-2018-NERSC-Site-Update-compressed.pptx (pptx: 13 MB)
NERSC Site Report focusing on plans for migration of tape-based system to new location and new technology, and collection of metrics for GPFS.
Kirill Lozinskiy, GPFS & HPSS Interface (GHI), Spectrum Scale User Group 2017, April 5, 2017,
- Download File: SSUG-2017-GHI.pptx (pptx: 2.6 MB)
This presentation gives a brief overview of integration between the High Performance Storage System (HPSS) and the General Parallel File System (GPFS).