News
NPACI Releases ROCKS Open-Source Toolkit for Installing and Managing High-Performance Clusters
Published October 29, 2000
Contact:
David Hart, SDSC, 858-534-8314, dhart@sdsc.edu
To accelerate the deployment of high-performance commodity clusters, the National Partnership for Advanced Computational Infrastructure (NPACI) has released version 1.0 of its NPACI Rocks software, a set of open-source enhancements for managing Linux-based clusters. NPACI Rocks, which will be demonstrated in the NPACI research exhibit at SC2000, has been used to build and install the new Meteor cluster at the San Diego Supercomputer Center (SDSC) as well as several other clusters at UC San Diego, forming the start of a campus cluster grid environment.
NPACI Rocks (http://rocks.npaci.edu/) is a set of open-source enhancements to Red Hat Linux. Rocks adds an extensible management style specific to clusters, several important augmentations to the Red Hat installation process, a bootable CD-ROM, a cluster configuration database (based on MySQL), and a number of cluster-specific packages. Rocks is aimed at tightly coupled clusters and directly supports low-latency interconnects, including Myrinet and ServerNet II.
"Because of SDSC's long experience in production high-performance computing for scientific research, we and our NPACI partners believe >it will be useful to apply our experience to exploring ways to make clusters more useful for production scientific computing," said Sid >Karin, director of SDSC and the National Partnership for Advanced Computational Infrastructure (NPACI). "The open source movement >gives us the opportunity to distribute our management tools worldwide."
The NPACI cluster project is focused on three goals: understanding and resolving the issues involved in operating production, general-purpose commodity clusters with thousands of nodes; working with cluster users at UC San Diego to reduce the time required to build and support departmental clusters; and collaborating with NPACI and other partners to build, collect, and disseminate cluster infrastructure software.
"Our goals are to help with all phases of building and operating these clusters, including federation of these clusters into the beginnings of a campus grid for computing," said Phil Papadopoulos, >leader of the SDSC clusters group. "We've concentrated on a robust management style as a key for building rock-solid high-performance clusters that simultaneously advance traditional computational science as well as computer science." SDSC's clusters group also includes SDSC staff Greg Bruno, Mason Katz, and graduate student Federico Sacerdoti.
With a management model influenced by the UC Berkeley Millennium Rootstock Project, NPACI Rocks is intended to be a starting point and collection point for cluster software within NPACI. Collaborators include David Culler, leader of the Millennium project, along with Eric Fraser, Matt Massie, Brent Chun, and Albert Goto at UC Berkeley.
For example, key software packages from the Millennium group, such as Rexec for secure, scalable, interactive job startup and Ganglia, a cluster monitoring toolkit, are included as standard Red Hat packages in the Rocks distribution. The NPACI Rocks software addresses the challenges of scaling clusters to larger numbers of machines by automating installation and maintenance tasks. A complete cluster install or unattended re-installation can be accomplished in under 10 minutes, and upgrades can be scheduled automatically through a batch system or a periodic script.
"Some people might consider this boring, but this will be extremely useful not only for installing software updates, but also for troubleshooting," Papadopoulos said. "Instead of trying to remember >or determine which machine is a bit different from the others, you can use this automated process to quickly reinstall identical software on all the machines and start fresh."
To establish an NPACI Rocks testbed and lay the groundwork for possible future deployment of an NPACI ultra-scale commodity cluster, the Meteor cluster has been deployed at SDSC as the first Rocks-based HPC cluster. Meteor comprises 25 dual-processor Compaq ProLiant nodes and 35 dual-processor IBM Netfinity nodes connected by a Myrinet interconnect. The 120-processor system has 30 GB of memory, 546 GB of disk storage, and a peak performance of 90 Gflops. In the coming year, the clusters team plans to add another 104 nodes, some with IA-64 processors.
NPACI Rocks has also been used to establish several other clusters at UC San Diego, including systems for the Scripps Institution of Oceanography (SIO) and two SDSC satellite sites funded by the W.M. Keck Foundation of Los Angeles. Other research groups are currently evaluating the Rocks toolkit. In addition to IBM and Compaq hardware, Rocks also runs unchanged on a Dell Dimension cluster at SIO.
More information on NPACI Rocks, including software downloads and instructions for requesting a CD-ROM, will soon be available on the Web at http://rocks.npaci.edu/.
SDSC is an organized research unit of the University of California, San Diego, and the leading-edge site of the NPACI (http://www.npaci.edu/). SDSC is funded by the National Science Foundation through NPACI and other federal agencies, the State and University of California, and private organizations. For additional information about SDSC and NPACI, see http://www.sdsc.edu/ or contact David Hart, dhart@sdsc.edu, 858-534-8314.
The National Partnership for Advanced Computational Infrastructure (NPACI) unites 46 universities and research institutions to build the computational environment for tomorrow's scientific discovery. Led by UC San Diego and the San Diego Supercomputer Center (SDSC), NPACI is funded by the National Science Foundation's Partnerships for Advanced Computational Infrastructure (PACI) program and receives additional support from the State and University of California, other government agencies, and partner institutions. The NSF PACI program also supports the National Computational Science Alliance. For additional information about NPACI, see http://www.npaci.edu/, or contact David Hart at SDSC, 858-534-8314, dhart@sdsc.edu.