NERSC Center News
Hybrid Multicore Consortium Tackles Programming Challenges
While hybrid multicore technologies will be a critical component in future high-end computing systems, most of today's scientific applications will require a significant re-engineering effort to take advantage of the resources provided by these systems. To address this challenge, three U.S. Department of Energy national laboratories, including the Berkeley Lab, and two leading universities have formed the Hybrid Multicore Consortium, or HMC, and held their first meeting at SC09. Read More »
Berkeley Lab Selects IBM Technology to Power Cloud Computing Research
IBM and the Lawrence Berkeley National Laboratory (Berkeley Lab) announced today that an IBM System x iDataPlex server will run the Lab's program to explore how cloud computing can be used to advance scientific discovery. Read More »
NERSC Results Help University of Florida Student Win Metropolis Award
Chao Cao was awarded the 2009 Metropolis Award for outstanding doctoral thesis work in computational physics earlier this year by the American Physical Society. His award-winning thesis, "First-Principles and Multi-Scale Modeling of Nano-Scale Systems," was honored for creatively using a variety of computational tools to reveal physical mechanisms in complex materials, and for developing a computing architecture that allows massively parallel multi-scale simulation of physical systems. Read More »
NERSC Uses Stimulus Funds to Overcome Software Challenges for Scientific Computing
A "multi-core" revolution is occurring in computer chip technology. No longer able to sustain the previous growth period where processor speed was continually increasing, chip manufacturers are instead producing multi-core architectures that pack increasing numbers of cores onto the chip. In the arena of high performance scientific computing, this revolution is forcing programmers to rethink the basic models of algorithm development, as well as parallel programming from both the language and parallel decomposition process. Read More »
Berkeley Lab Researchers Prepare U.S. Climate Community for 100-Gigabit Data Transfers
Climate 100, funded with $201,000 under the American Recovery and Reinvestment Act, will bring together middleware and network researchers to develop the tools and techniques needed to move unprecedented amounts of climate data. Read More »
Increase in I/O Bandwidth to Enhance Future Understanding of Climate Change
Researchers at Pacific Northwest National Laboratory (PNNL)—in collaboration with the National Energy Research Scientific Computing Center (NERSC) located at the Lawrence Berkeley National Laboratory, Argonne National Laboratory, and Cray—recently achieved an effective aggregate I/O bandwidth of 5 gigabytes per second (GB/s) for writing output from a global atmospheric model to shared files on DOE's "Franklin," a 39,000-processor Cray XT4 supercomputer located at NERSC. The work is part of a Science Application Partnership funded under DOE's SciDAC program. Read More »
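Writing from many processors to a single shared file, as described above, is commonly expressed with MPI-IO collective operations. The sketch below shows the general pattern only; the file name and sizes are hypothetical, and it is not the atmospheric model's actual output code.

    /* A minimal sketch of collective parallel I/O to one shared file
     * via MPI-IO (generic illustration with hypothetical names). */
    #include <mpi.h>

    #define LOCAL_N 1024   /* values written by each rank (hypothetical) */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double buf[LOCAL_N];
        for (int i = 0; i < LOCAL_N; i++)
            buf[i] = rank + i * 1e-6;        /* stand-in "model output" */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Each rank writes its slice at a disjoint offset; the collective
         * call lets the MPI library aggregate many writes into few. */
        MPI_Offset offset = (MPI_Offset)rank * LOCAL_N * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, LOCAL_N, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Collective calls such as MPI_File_write_at_all give the MPI library and the underlying parallel file system a chance to aggregate many small writes into fewer large ones, one of the usual levers for raising aggregate bandwidth.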
National Energy Research Scientific Computing Center (NERSC) Awards Supercomputer Contract to Cray
The Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory announced today that a contract for its next generation supercomputing system will be awarded to Cray Inc. The multi-year supercomputing contract includes delivery of a Cray XT5™ massively parallel processor supercomputer, which will be upgraded to a future-generation Cray supercomputer. When completed, the new system will deliver a peak performance of more than one petaflops, equivalent to more than one quadrillion calculations per second. Read More »
NERSC Hosts Workshop About the Dawn of Exascale Storage
This month, the Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) hosted the first workshop to discuss strategies for managing and storing the influx of new archival data that will be produced in the exascale era, when supercomputers will be capable of a quintillion (1,000 quadrillion) calculations per second. Experts predict that DOE's first exascale supercomputer for scientific research will be deployed in 2018. Read More »
Jeff Broughton Brings 30 Years of HPC Experience to NERSC as New Head of Systems Department
Jeffrey M. Broughton, who has 30 years of HPC and management experience, has accepted the position of Systems Department Head at the Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC). Broughton, who most recently served as senior director of engineering at QLogic Corp., joins NERSC on Monday, August 3. Read More »
NERSC's Franklin Supercomputer Upgraded to Double Its Scientific Capability
The Department of Energy's (DOE) National Energy Research Scientific Computing (NERSC) Center has officially accepted a series of upgrades to its Cray XT4 supercomputer, providing the facility's 3,000 users with twice as many processor cores and an expanded file system for scientific research. NERSC's Cray supercomputer is named Franklin in honor of Benjamin Franklin, the United States' pioneering scientist. Read More »
NERSC Builds Gateways for Science Sharing
Programmers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) are working with science users to design custom web browser interfaces and analytics tools, a service called “science gateways,” making it easier for researchers to share their data with a larger community. Read More »
NERSC Delivers 59.9 Petabytes of Storage with Cutting-Edge Technology
NERSC's High Performance Storage System (HPSS) can now hold 59.9 petabytes of scientific data — equivalent to all the music, videos or photos that could be stored on approximately 523,414 iPod classics filled to capacity. This 37-petabyte increase in HPSS storage was made possible by deploying cutting-edge technologies. Read More »
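For the curious, the iPod figure checks out if one assumes the 120 GB iPod classic model and binary (power-of-two) storage prefixes:

    \[
      \frac{59.9 \times 2^{50}\ \text{bytes}}{120 \times 2^{30}\ \text{bytes per iPod}}
      \;\approx\; 523{,}414\ \text{iPods}
    \]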
VisIt Supported on Cray Systems
Scientists computing on NERSC's Cray XT4 system, called Franklin, can have it all. Now that VisIt, one of the most popular frameworks for scientific visualization, is available on Franklin, users can run their simulations on the machine and visualize the output there too. Read More »
Speeding Up Science Data Transfers Between Department of Energy Facilities
As scientists conduct cutting-edge research with ever more sophisticated techniques, instruments, and supercomputers, the data sets that they must move, analyze, and manage are growing to unprecedented sizes. The ability to move and share data is essential to scientific collaboration. In support of this activity, network and systems engineers from the Department of Energy's (DOE) Energy Sciences Network (ESnet), National Energy Research Scientific Computing Center (NERSC), and Oak Ridge Leadership Computing Facility (OLCF) are teaming up to optimize wide-area network (WAN) data transfers. Read More »
NERSC Increases System Storage and Security for Users
Throughout the month of March, the Cray XT4 machine Franklin underwent a series of upgrades and improvements, including a major I/O upgrade. The disk capacity of the scratch file system was increased by 30% to 460 TB, and the aggregate I/O write bandwidth was nearly tripled to 32 GB/s, up from 11 GB/s before the upgrade. Read More »
Berkeley Lab Checkpoint Restart Improves Productivity
The new version of the Berkeley Lab Checkpoint/Restart (BLCR) software, released in January 2009, could mean that scientists running extensive calculations will be able to recover from major crashes, provided they are running on a Linux system. This open-source software preemptively saves the state of applications that use the Message Passing Interface (MPI), the most widely used mechanism for communication among processors working concurrently on a single problem. Read More »
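To see what BLCR spares users from, the contrast sketch below hand-rolls application-level checkpointing, where the programmer must explicitly decide what state to save and when. BLCR requires none of this in user code; it captures process state transparently at the system level. Nothing below is BLCR's API, and the checkpoint file name is hypothetical.

    /* Contrast sketch: hand-rolled application-level checkpointing.
     * BLCR removes the need for code like this by capturing process
     * state transparently; nothing here is BLCR's API. */
    #include <stdio.h>

    #define N 1000000
    #define CKPT_FILE "state.ckpt"   /* hypothetical checkpoint file */

    /* Resume from a previous checkpoint if one exists; return the
     * step to continue from (0 on a fresh run or unreadable file). */
    static long restore(double *sum)
    {
        long step = 0;
        FILE *f = fopen(CKPT_FILE, "rb");
        if (f) {
            if (fread(&step, sizeof step, 1, f) != 1 ||
                fread(sum, sizeof *sum, 1, f) != 1) {
                step = 0;
                *sum = 0.0;
            }
            fclose(f);
        }
        return step;
    }

    /* Persist the loop counter and running state to disk. */
    static void checkpoint(long step, double sum)
    {
        FILE *f = fopen(CKPT_FILE, "wb");
        if (!f) return;
        fwrite(&step, sizeof step, 1, f);
        fwrite(&sum, sizeof sum, 1, f);
        fclose(f);
    }

    int main(void)
    {
        double sum = 0.0;
        long start = restore(&sum);   /* 0 on a fresh run */

        for (long i = start; i < N; i++) {
            sum += 1.0 / (double)(i + 1);
            if ((i + 1) % 100000 == 0)   /* periodic checkpoint */
                checkpoint(i + 1, sum);
        }
        printf("sum = %.12f\n", sum);
        return 0;
    }

As the article notes, BLCR extends its transparent approach to MPI applications, so long-running parallel jobs can resume after a crash without any of this bookkeeping in the science code.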
Green Flash Project Runs First Prototype Successfully
Berkeley Lab’s Green Flash project, which is exploring the feasibility of building a new class of energy-efficient supercomputers for climate modeling, has successfully reached its first milestone by running the atmospheric model of a full climate code on a logical prototype of a Green Flash processor. Read More »