High-Performance Computing

SDSC is Leading the Way

For 40 years, SDSC has led the way in developing and delivering high-performance computing (HPC) systems for a wide range of users, from the University of California to the national research community. From the earliest Cray machines to today’s data-intensive systems, SDSC has provided innovative architectures designed to keep pace with the changing needs of science and engineering.

Whether you’re a researcher looking to expand computing beyond your lab or a business seeking a competitive advantage, SDSC’s HPC experts will guide you in selecting the right resource.


Expanse

Performance

5 Pflop/s peak; 93,184 CPU cores; 208 NVIDIA GPUs; 220 TB total DRAM; 810 TB total NVMe

Key Features

Standard Compute Nodes (728 total)

AMD EPYC 7742 (Rome) processors; 2.25 GHz; 128 cores per node; 256 GB DRAM per node; 1 TB NVMe per node

GPU Nodes (52 total)

NVIDIA V100 SXM2 GPUs, 4 per node; 40 Intel Xeon Gold 6248 CPU cores per node; 384 GB CPU DRAM per node; 2.5 GHz CPU clock speed; 32 GB memory per GPU; 1.6 TB NVMe per node; GPUs connected via NVLink (see the device-query sketch after this feature list)

Large-memory Nodes (4 total)

AMD Rome nodes; 2 TB DRAM per node; 3.2 TB SSD per node; 128 cores per node; 2.25 GHz

Interconnect

HDR InfiniBand, Hybrid Fat-Tree topology; 100 Gb/s (bidirectional) link bandwidth; 1.17-x.xx µs MPI latency

Storage Systems

Access to Lustre (12 PB) and Ceph (7 PB) storage

SDSC Scalable Compute Units (13 total)

The entire system is organized as 13 complete SSCUs, each consisting of 56 standard compute nodes and four GPU nodes connected with 100 Gb/s HDR InfiniBand.
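
As a quick illustration of the GPU node layout above, the following minimal sketch (plain C host code against the CUDA runtime API, not an SDSC-provided tool) lists the GPUs visible on a node and their memory; on an Expanse GPU node the output should reflect four 32 GB V100s. The file name and build line are illustrative assumptions only.

/*
 * Minimal GPU device-query sketch using the CUDA runtime API.
 * Build options (illustrative): compile with nvcc (e.g. as gpu_query.cu),
 * or build as C and link against the CUDA runtime library.
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices visible.\n");
        return 1;
    }
    printf("Visible GPUs: %d\n", count);

    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* Report name, memory in GB, and compute capability for each GPU. */
        printf("GPU %d: %s, %.1f GB memory, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / 1073741824.0,
               prop.major, prop.minor);
    }
    return 0;
}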
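
The interconnect figures above quote link bandwidth and small-message MPI latency; a standard way to check such numbers is a two-rank ping-pong test. The sketch below is a generic benchmark, assuming only a working MPI installation (compile with mpicc and launch one rank on each of two nodes through the site's scheduler); nothing Expanse-specific is assumed.

/*
 * Minimal MPI ping-pong latency sketch.
 * Build (illustrative): mpicc -O2 pingpong.c -o pingpong
 * Run with one rank on each of two nodes to estimate one-way latency.
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    /* Ranks 0 and 1 exchange a 1-byte message ITERS times; other ranks idle. */
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0) {
        /* One round trip is two one-way messages; report one-way latency. */
        printf("Average one-way latency: %.2f us\n",
               (t1 - t0) / ITERS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}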

Learn more about Expanse

Triton Shared Computing Cluster (TSCC)

Performance

80+ Tflop/s

Key Features

General Computing Nodes

Dual-socket, 12-core, 2.5 GHz Intel Xeon E5-2680 (coming) and dual-socket, 8-core, 2.6 GHz Intel Xeon E5-2670 (see the OpenMP sketch after this feature list)

GPU Nodes

Host processors: dual-socket, 6-core, 2.6 GHz Intel Xeon E5-2630v2; GPUs: 4 NVIDIA GeForce GTX 980 per node

Large-memory Nodes (4 total)

AMD Rome nodes; 2 TB DRAM per node; 3.2 TB SSD per node; 128 cores per node; 2.25 GHz

Interconnect

10 GbE (QDR InfiniBand optional)

Lustre-based Parallel File System

Access to Data Oasis
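
For single-node, shared-memory work on the multicore Xeon nodes listed above, OpenMP is a common starting point. The sketch below is a minimal, generic example (a parallel reduction estimating pi) assuming only a C compiler with OpenMP support; nothing TSCC-specific such as module or queue names is assumed, and the build line is illustrative.

/*
 * Minimal OpenMP parallel-reduction sketch (pi via numerical integration).
 * Build (illustrative): gcc -O2 -fopenmp omp_pi.c -o omp_pi
 * Thread count can be set with OMP_NUM_THREADS; it defaults to the cores available.
 */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const long n = 100000000L;           /* number of integration steps */
    const double step = 1.0 / (double)n;
    double sum = 0.0;

    double t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }
    double pi = sum * step;
    double t1 = omp_get_wtime();

    printf("pi ~= %.10f  (%.3f s on up to %d threads)\n",
           pi, t1 - t0, omp_get_max_threads());
    return 0;
}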

Learn more about TSCC

Voyager

Performance

Xxxxxx

Key Features

General Computing Nodes

Xxxxxx

GPU Nodes

Xxxxxx

Large-memory Nodes (4 total)

Xxxxxx

Interconnect

Xxxxxx

Lustre-based Parallel File System

Xxxxxx

Learn more about Voyager

Cosmos

Performance

Xxxxxx

Key Features

General Computing Nodes

Xxxxxx

GPU Nodes

Xxxxxx

Large-memory Nodes (4 total)

Xxxxxx

Interconnect

Xxxxxx

Lustre-based Parallel File System

Xxxxxx

Learn more about Cosmos