Expanse


Computing Without Boundaries

Expanse supports SDSC’s vision of “Computing without Boundaries” by increasing the capacity and performance for thousands of users of batch-oriented and science gateway computing, and by providing new capabilities that will enable research increasingly dependent upon heterogeneous and distributed resources composed into integrated and highly usable cyberinfrastructure.


System Performance: Key Features
5 Pflop/s peak; 93,184 CPU cores; 208 NVIDIA GPUs; 220 TB total DRAM; 810 TB total NVMe

Standard Compute Nodes (728 total)
AMD EPYC 7742 (Rome) Compute Nodes; 2.25 GHz; 128 cores per node; 1 TB NVMe per node; 256 GB DRAM per node

GPU Nodes (52 total)
NVIDIA V100 SXM2 with 4 GPUs per node; 40 Intel Xeon 6248 CPU cores per node; 384 GB CPU DRAM per node; 2.5 GHz CPU clock speed; 32 GB memory per GPU; 1.6 TB NVMe per node; GPUs connected via NVLINK

Large-memory Nodes (4 total)
AMD Rome nodes; 2 TB DRAM per node; 3.2 TB SSD storage per node; 128 cores per node; 2.25 GHz

Interconnect
HDR InfiniBand, Hybrid Fat-Tree topology; 100 Gb/s (bidirectional) link bandwidth; 1.17-x.xx µs MPI latency

Storage Systems
Access to Lustre (12 PB) and Ceph (7 PB) storage

SDSC Scalable Compute Units (13 total)

Entire system organized as 13 complete SSCUs, each consisting of 56 standard compute nodes and four GPU nodes connected with 100 Gb/s HDR InfiniBand
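The system-wide totals quoted above follow directly from the per-unit figures on this page. A minimal sketch that multiplies them out (all numbers are taken from this page, nothing else is assumed):

```python
# Sanity-check of the published Expanse totals from the per-unit specs above.
SSCUS = 13                     # Scalable Compute Units
STANDARD_NODES_PER_SSCU = 56   # standard compute nodes per SSCU
GPU_NODES_PER_SSCU = 4         # GPU nodes per SSCU
CORES_PER_STANDARD_NODE = 128  # AMD EPYC 7742 cores per standard node
GPUS_PER_GPU_NODE = 4          # NVIDIA V100 GPUs per GPU node

standard_nodes = SSCUS * STANDARD_NODES_PER_SSCU      # 728 standard nodes
gpu_nodes = SSCUS * GPU_NODES_PER_SSCU                # 52 GPU nodes
cpu_cores = standard_nodes * CORES_PER_STANDARD_NODE  # 93,184 CPU cores
gpus = gpu_nodes * GPUS_PER_GPU_NODE                  # 208 GPUs

print(standard_nodes, gpu_nodes, cpu_cores, gpus)  # → 728 52 93184 208
```

Note that the 93,184-core total counts only the standard compute nodes; the Xeon cores on the GPU nodes are listed separately.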

Trial Accounts

Trial Accounts give users rapid access for the purpose of evaluating Expanse for their research. This can be a useful step in assessing the value of the system, allowing potential users to compile, run, and do initial benchmarking of their applications prior to submitting a larger Startup or Research allocation. Trial Accounts are for 1,000 core-hours, and requests are fulfilled within one (1) working day.
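To put the 1,000 core-hour trial allocation in perspective, a quick back-of-the-envelope calculation against the 128-core standard nodes described above (both figures come from this page):

```python
# Rough illustration of what a 1,000 core-hour Trial Account covers
# on Expanse's 128-core standard compute nodes.
TRIAL_CORE_HOURS = 1000
CORES_PER_NODE = 128

# A job using one full standard node consumes 128 core-hours per wall-clock hour.
full_node_hours = TRIAL_CORE_HOURS / CORES_PER_NODE
print(round(full_node_hours, 1))  # → 7.8 wall-clock hours on one full node
```

In other words, the trial is enough for several short full-node benchmarking runs, or considerably longer runs on a subset of a node's cores.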
