On-Demand Learning

2021 HPC User Training Series

SDSC supports the training of its user community, including students, in all aspects of high-performance computing (HPC). The goal of the training is to prepare new HPC users to run jobs on HPC systems. Participants who successfully complete the HPC Training program will receive an SDSC Certificate of Completion in HPC Training; UCSD students will also receive Co-Curricular Record credit.

HPC User Training Website | Interactive Training

2022 Cyberinfrastructure-Enabled Machine Learning Summer Institute

*APPLICATION CLOSED* The CIML Summer Institute introduces ML researchers, developers, and educators to the techniques and methods needed to migrate their ML applications from smaller, locally run resources, such as laptops and workstations, to large-scale HPC systems such as the SDSC Expanse supercomputer.

Interactive Videos

2023 Cyberinfrastructure-Enabled Machine Learning Summer Institute

The CIML Summer Institute introduces ML researchers, developers, and educators to the techniques and methods needed to migrate their ML applications from smaller, locally run resources, such as laptops and workstations, to large-scale HPC systems such as the SDSC Expanse supercomputer. (Application deadline: Friday, April 7.)

2023 HPC and Data Science Summer Institute

The SDSC Summer Institute is a week-long workshop hosted by the San Diego Supercomputer Center (SDSC) at the University of California, San Diego focusing on a broad spectrum of introductory-to-intermediate topics in High Performance Computing and Data Science. The program is aimed at researchers in academia and industry, especially in domains not traditionally engaged in supercomputing, who have problems that cannot typically be solved using local computing resources. (Application deadline is Friday, May 19).

2024 Cyberinfrastructure-Enabled Machine Learning Summer Institute

The CIML Summer Institute introduces ML researchers, developers, and educators to the techniques and methods needed to migrate their ML applications from smaller, locally run resources, such as laptops and workstations, to large-scale HPC systems such as the SDSC Expanse supercomputer. (Application deadline: Friday, April 12, 2024.)

2024 HPC and Data Science Summer Institute

The SDSC Summer Institute is a week-long workshop hosted by the San Diego Supercomputer Center (SDSC) at the University of California, San Diego focusing on a broad spectrum of introductory-to-intermediate topics in High Performance Computing and Data Science. The program is aimed at researchers in academia and industry, especially in domains not traditionally engaged in supercomputing, who have problems that cannot typically be solved using local computing resources. (Application deadline: Friday, April 26, 2024.)

AMD EPYC Advanced User Training on Expanse

This event will help users to make the most effective use of Expanse’s AMD EPYC processors. Topics include an introduction to the EPYC architecture, AMD compilers and math libraries, strategies for mapping processes and tasks to compute cores, Slurm, application tuning and profiling tools.

Webinar Recording | GitHub Repository

AMD HPC User Forum - Member Sync at ISC24!

The AMD HPC User Forum is holding a private technical workshop prior to ISC High Performance covering the AMD Instinct™ MI300 Series products, as well as the ROCm™ stack for HPC and AI.

Azure OpenAI for Research Webinar hosted by CloudBank

The CloudBank project is hosting a webinar on OpenAI with Microsoft. Please join us to hear a quick introduction to CloudBank, get an overview of OpenAI from the Microsoft team, hear from one of our CloudBank researchers, and then have an opportunity to ask questions.

Comet 101: Accessing and Running Jobs on Comet

This webinar covers the basics of accessing the SDSC Comet supercomputer, managing the user environment, compiling and running jobs on Comet, where to run them, and how to run batch jobs. It is assumed that you have mastered the basic skills of logging onto Comet and running basic Unix commands. The webinar will include access to training material.

Interactive Video | GitHub Repository

Comet to Expanse Transition Tutorial

This tutorial is intended for all current users of Comet who intend to make the transition to Expanse. Topics will include an overview of the system, batch job submission, modules, compilation, job charging, basic optimization, interactive computing and data transfer.

Interactive Video | GitHub Repository

Comet to Expanse Transition Tutorial 2021

This tutorial is intended for all current users of Comet who intend to make the transition to Expanse. Topics will include an overview of the system, batch job submission, modules, compilation, job charging, basic optimization, interactive computing and data transfer.

GitHub Repository | Interactive Video

Comet Webinar: A Quick Introduction to Machine Learning

Machine learning is an interdisciplinary field focused on the study and construction of computer systems that can learn from data without being explicitly programmed. Machine learning techniques can be used to uncover patterns in your data and gain insights into your problem.

Interactive Video | Download Slides

Comet Webinar: CUDA-Python and RAPIDS for blazing fast scientific computing

This webinar introduces users to Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations.

Interactive Video

Comet Webinar: Data Visualization With Python Using Jupyter Notebooks

Python is rapidly becoming the programming language of choice for scientific research, and Jupyter Notebooks provide a user-friendly way of writing and running Python code and of teaching and learning how to program. Visual analytics is playing an increasingly important role in data science by allowing researchers to explore massive amounts of data for patterns that may not be obvious using other methods.

Interactive Video | Download Slides | GitHub Repository

Comet Webinar: Distributed Parallel Computing with Python

This webinar provides an introduction to distributed computing with Python. We will show how to modify a standard Python script to use multiple CPU cores, first with the concurrent.futures module from the Python standard library and then with the dask package.

Interactive Video | Download Slides | GitHub Repository
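
The serial-to-parallel conversion this webinar walks through can be sketched with the standard library alone. The `work` function below is a hypothetical stand-in for any CPU-bound task; dask is not shown here, only the `concurrent.futures` step:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def work(n):
    # Stand-in for any CPU-bound task you would farm out to workers.
    return sum(i * i for i in range(n))

def run_serial(inputs):
    # The original, single-core loop.
    return [work(n) for n in inputs]

def run_parallel(inputs, executor_cls=ProcessPoolExecutor, max_workers=4):
    # executor.map dispatches inputs across workers and returns
    # results in the original input order.
    with executor_cls(max_workers=max_workers) as executor:
        return list(executor.map(work, inputs))
```

For CPU-bound work, `ProcessPoolExecutor` sidesteps the GIL; swapping in `ThreadPoolExecutor` is the usual choice for I/O-bound tasks, and the two are interchangeable here because they share an interface.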

Comet Webinar: GPU Computing and Programming

This webinar provides an introduction to massively parallel computing with graphics processing units (GPUs) on the SDSC Comet supercomputer. The use of GPUs is becoming increasingly popular across all scientific domains since GPUs can significantly accelerate time to solution for many computational tasks. In this webinar, participants will learn how to access Comet GPU nodes, how to launch GPU jobs on Comet, and get introduced to GPU programming. The webinar will cover the essential background of GPU chip architectures and the basics of programming GPUs via the use of libraries, OpenACC compiler directives, and the CUDA programming language. The participants will thus acquire the foundation to use and develop GPU aware applications. 

Download Recording | Download Slides

Comet Webinar: Introduction to Deep Learning

Deep learning has seen tremendous growth and success in the past few years. Deep learning techniques have achieved state-of-the-art performance across many domains, including image classification, speech recognition, and biomedical applications.

Interactive Video | Download Slides

Comet Webinar: Introduction to Expanse

The goal of this webinar is to provide an overview of Expanse, an upcoming NSF-funded HPC resource at SDSC. Expanse will have nearly double the performance of the Comet supercomputer. With innovations in cloud integration and composable systems, as well as continued support for science gateways and distributed computing via the Open Science Grid, Expanse will allow researchers to push the boundaries of computing.

Interactive Video | Download Slides

Comet Webinar: Introduction to Running Jobs on Comet

This webinar covers the basics of accessing the SDSC Comet supercomputer, managing the user environment, compiling and running jobs on Comet, where to run them, and how to run batch jobs. It is assumed that you have mastered the basic skills of logging onto Comet and running basic Unix commands. The webinar will include access to training material.

Interactive Video | Download Slides | GitHub Repository

Comet Webinar: Obtaining Hardware Information and Monitoring Performance

In this webinar we start by describing how to obtain hardware and system information such as CPU specifications, memory quantity, cache configuration, mounted file systems and their usage, peripheral storage devices and GPU properties. This information is useful for anyone who is interested in how hardware specs influence performance or who needs to report benchmarking data.

Interactive Video | Download Slides
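
A small, portable slice of what this webinar gathers can be collected from Python's standard library alone; on a cluster you would typically also shell out to tools such as `lscpu`, `free`, and `nvidia-smi` (assumed external tools, not shown):

```python
import os
import platform

def hardware_summary():
    # Portable subset of the hardware and system details the
    # webinar collects for benchmarking reports.
    return {
        "cpu_count": os.cpu_count(),      # logical cores visible to this process
        "machine": platform.machine(),    # e.g. x86_64
        "system": platform.system(),      # e.g. Linux
        "python": platform.python_version(),
    }
```

Recording this alongside benchmark results makes it easier to explain performance differences between runs on different nodes.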

Comet Webinar: Running Jupyter Notebooks on Comet

In this webinar, we will present SDSC’s multitiered approach to running notebooks more securely: running notebooks in the usual way over insecure HTTP connections; hosting Jupyter services on Comet using HTTP over SSH tunneling; and the SDSC Reverse Proxy Service (RPS), which connects the user over an HTTPS connection. When used, the RPS will launch a batch script that creates a securely hosted HTTPS access point for the user, resulting in a safer, more secure notebook environment.

Interactive Video | Download Slides | GitHub Repository

Comet Webinar: Using the NVIDIA RAPIDS Toolkit on Comet

In this webinar we will show how to use RAPIDS to accelerate your data science applications utilizing libraries like cuDF (GPU-enabled Pandas-like dataframes) and cuML (GPU-accelerated machine learning algorithms).

Interactive Video | Download Slides | GitHub Repository

Comet Webinar: Indispensable Security: Tips to Use SDSC's HPC Resources Securely

This webinar will highlight security-related topics that can improve the trustworthiness of your research. The topics covered include logging in to SDSC's HPC resources, file and directory permissions, and common practices that may create trouble.

Interactive Video | Download Slides
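
One of the common trouble spots this webinar covers is file and directory permissions. A minimal sketch of the check, using only the standard library (the `is_private` helper is illustrative, not part of the webinar material): SSH keys and credential files should be accessible by their owner only.

```python
import os
import stat

def is_private(path):
    # True if the file carries no group or other permission bits --
    # the state you want for SSH keys and credential files
    # (equivalent to mode 0o600 or stricter).
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```

The same check is what `ssh` itself performs: it refuses to use a private key whose permissions are too open.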

COMPLECS: Batch Computing: Getting Started with Batch Job Scheduling - Slurm Edition

A brief introduction on how to schedule your batch jobs on high-performance computing systems using the Slurm Workload Manager.

Recorded Webinar | Interactive Video

COMPLECS: Code Migration

Introduction to porting codes and workflows to HPC resources.

Link

COMPLECS: Data Storage and File Systems

How to use the data storage and file systems you’ll find mounted on high-performance computing systems.

Link

COMPLECS: High-Throughput and Many-Task Computing - Slurm Edition

How to build and run your high-throughput and many-task computing workflows on high-performance computing systems using the Slurm Workload Manager.

COMPLECS: HPC Hardware Overview

A brief introduction to what makes up an HPC system and how users can apply this information. No programming required.

Link

COMPLECS: HPC Hardware Overview

A brief introduction to what makes up an HPC system and how users can apply this information. No programming required.

Link | Recorded Webinar | Interactive Video

COMPLECS: HPC Security and Getting Help

A discussion of best practices for using HPC systems and getting support.

Recorded Webinar

COMPLECS: Interactive Computing

Interactive high-performance computing (HPC) involves real-time user inputs that result in actions being performed on HPC compute nodes. This session presents an overview of interactive computing tools and methods.

Link

COMPLECS: Interactive Computing

Interactive high-performance computing (HPC) involves real-time user inputs that result in actions being performed on HPC compute nodes. This session presents an overview of interactive computing tools and methods.

Link | Recorded Webinar | Interactive Video | GitHub Repository

COMPLECS: Intermediate Linux and Shell Scripting

A survey of intermediate Linux skills for effectively using advanced cyberinfrastructure.

Link

COMPLECS: Intermediate Linux and Shell Scripting

A survey of intermediate Linux skills for effectively using advanced cyberinfrastructure.

Link | Interactive Video

COMPLECS: Intermediate Linux and Shell Scripting

A survey of intermediate Linux skills for effectively using advanced cyberinfrastructure.

Interactive Video

COMPLECS: Parallel Computing Concepts

A brief introduction to fundamental concepts in parallel computing. No programming experience needed.

Link

COMPLECS: Parallel Computing Concepts

A brief introduction to fundamental concepts in parallel computing for anyone who uses HPC resources.

Interactive Video

Data Management & File Systems

Managing data efficiently on a supercomputer is important from both the user's and the system's perspectives. In this webinar, we will cover a few basic data management techniques and I/O best practices in the context of the Expanse system at SDSC.

Interactive Video
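
One widely taught I/O best practice on parallel file systems like Expanse's is to avoid creating thousands of tiny files, which hammer the metadata servers; bundling them into a single archive is the usual remedy. A minimal sketch with the standard library (the `bundle` helper and its paths are illustrative):

```python
import tarfile
from pathlib import Path

def bundle(files, archive_path):
    # Pack many small files into one compressed tar archive so the
    # parallel file system handles a single large object instead of
    # thousands of metadata-heavy small ones.
    with tarfile.open(archive_path, "w:gz") as tar:
        for f in files:
            tar.add(f, arcname=Path(f).name)
    return archive_path
```

The archive can later be staged to fast local scratch and unpacked there, keeping small-file traffic off the shared file system.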

Expanse 101: Accessing and Running Jobs on Expanse

This webinar covers the basics of accessing the SDSC Expanse supercomputer, managing the user environment, compiling and running jobs on Expanse.

Interactive Video

Expanse Webinar: Accessing and Running Jobs on Expanse

This webinar covers the basics of accessing SDSC's Expanse supercomputer, managing the user environment, compiling and running jobs using Slurm, where to run them, and how to run batch jobs. We will also cover interactive computing using applications such as Jupyter Notebooks and how to run them via the command line or from the Expanse portal. It is assumed that you have mastered the basic skills of logging onto HPC systems using SSH and running basic Unix commands on these systems.

Interactive Video

Expanse Webinar: Accessing and Running Jobs on Expanse

This webinar covers the basics of accessing SDSC's Expanse supercomputer, managing the user environment, compiling and running jobs using Slurm, where to run them, and how to run batch jobs.

Interactive Video

Expanse Webinar: An Introduction to Singularity: Containers for Scientific and High-Performance Computing

Come learn about Singularity containers and how you might use them in your own work.

Interactive Video

Expanse Webinar: Composable Systems in Expanse

This webinar will present the approach and the architecture of the composable systems component of Expanse. We will also summarize scientific case studies that demonstrate the application of this new infrastructure and its federation with Nautilus, a Kubernetes-based GPU geo-distributed cluster.

Interactive Video

Expanse Webinar: Data Management & File Systems on Expanse

Managing data efficiently on a supercomputer is very important from both the user's and the system's perspectives. In this webinar, we will cover some basic data management techniques and I/O best practices in the context of the Expanse system at SDSC.

Interactive Video

Expanse Webinar: Enduring Security: The Journey Continues

The first in a recurring webinar series on using Expanse and other SDSC HPC resources securely. This webinar will cover security and security-related topics relevant to researchers and the trustworthiness of their work produced on these resources.

Interactive Video

Expanse Webinar: GPU Computing and Programming on Expanse

This webinar will give a brief introduction to GPU computing and programming on Expanse. We will cover the GPU architecture, programming with the NVIDIA HPC SDK via libraries, OpenACC compiler directives, CUDA, profiling and debugging, and submitting GPU-enabled jobs on Expanse.

Interactive Video

Expanse Webinar: How to Secure Your Jupyter Notebook Sessions on Expanse

Come learn how to launch your Jupyter notebook sessions on Expanse in a simple, secure way.

Interactive Video

Expanse Webinar: Introduction to Neural Networks, Convolutional Neural Networks and Deep Learning on Expanse

This webinar will be a quick introduction to and overview of neural networks, convolutional networks, and deep learning on Expanse.

Interactive Video

Expanse Webinar: Parallel Computing Concepts

In this webinar we cover supercomputer architectures, the differences between threads and processes, implementations of parallelism (e.g., OpenMP and MPI), strong and weak scaling, limitations on scalability (Amdahl’s and Gustafson’s Laws) and benchmarking.

Interactive Video
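
The scaling limits this webinar names can be written down directly. As a sketch, with a code that is 95% parallelizable, Amdahl's Law caps the speedup at 20x no matter how many workers are added, while Gustafson's Law describes the near-linear scaling you recover by growing the problem with the worker count:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    # Amdahl's Law (strong scaling): fixed problem size, so the
    # serial fraction bounds the achievable speedup.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

def gustafson_speedup(parallel_fraction, n_workers):
    # Gustafson's Law (weak scaling): the problem grows with the
    # worker count, so speedup grows nearly linearly.
    serial = 1.0 - parallel_fraction
    return serial + parallel_fraction * n_workers
```

For example, `amdahl_speedup(0.95, 1024)` is already close to the 20x ceiling, while `gustafson_speedup(0.95, 1024)` keeps growing with the machine.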

Expanse Webinar: Parallel Computing Concepts

In this webinar we cover supercomputer architectures, the differences between threads and processes, implementations of parallelism (e.g., OpenMP and MPI), strong and weak scaling, limitations on scalability (Amdahl’s and Gustafson’s Laws) and benchmarking.

Interactive Video

Expanse Webinar: Performance Tuning and Single Processor Optimization

This presentation will cover cache-level optimizations and other techniques for achieving optimal software performance. We will also cover AMD-specific compiler options, libraries, and performance tools.

Interactive Video

Expanse Webinar: Running Jupyter Notebooks on Expanse

In this webinar, we will present SDSC’s multitiered approach to running notebooks more securely: hosting Jupyter services on Expanse using SSH Tunneling or using the SDSC Jupyter Reverse Proxy Service (JRPS), which connects the user over an HTTPS connection. The JRPS will launch a batch script that creates a securely hosted HTTPS access point for the user, resulting in a safer, more secure notebook environment.

Interactive Video

Expanse Webinar: Run your Jupyter Notebooks anywhere: Scaling up your Projects from Laptop to Expanse

In this webinar we demonstrate how to transition your Jupyter Notebooks from a local machine to the Expanse HPC system using command-line tools and the Expanse Portal. We cover creating transferable software environments, scaling up calculations to large datasets, parallel processing, and running Jupyter Notebooks in batch mode.

Interactive Video

Expanse Webinar: Scientific Computing with Kubernetes

In this webinar we provide recipes for transitioning scientific workloads that currently run on traditional batch systems to Kubernetes systems. Kubernetes is batch-like in nature, but there are some differences that science users should be aware of. We will also briefly describe capabilities that are not found in traditional batch systems that can improve the effectiveness of scientific computing.

Interactive Video

Expanse Webinar: Singularity – Containers for Scientific and High-Performance Computing

Come learn all about Singularity containers. In this webinar, we'll provide an overview of Singularity and how you might incorporate the use of containers in your own research. We'll also show you how to access and use some of the containerized applications that we make available to users on Expanse at SDSC.

Interactive Video

FABRIC - KNIT 8 Workshop

KNIT 8, the next FABRIC Community Workshop, will take place March 19-21, 2024 in San Diego, CA. KNIT 8 will be hosted by the San Diego Supercomputer Center and co-located with the Fifth National Research Platform (5NRP) Workshop. It will be the first workshop since FABRIC has entered full operations.

Fall 2023: Quarterly Mini-Workshop on HPC CI Onboarding for the UCSD Research Community

The mini-workshop is designed to provide the UCSD research community with a streamlined pathway to swiftly engage with the Expanse cluster for their scientific endeavors. Collaboratively organized by Research IT and SDSC, this workshop series offers participants the opportunity to start using the Expanse cluster through the Campus Champions allocation, while benefiting from comprehensive training resources and expert guidance provided by SDSC.

Interactive Video

Getting Started with Batch Job Scheduling: Slurm Edition

Learn how to write your first batch job script and submit it to a Slurm batch job scheduler. We discuss best practices on how to structure your batch job scripts, teach you how to leverage Slurm environment variables, and provide you with some tips on how to request resources from the scheduler to get your work done faster. We also introduce you to some advanced features like Slurm job arrays and job dependencies for more structured computational workflows.

Interactive Video
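
One of the Slurm environment variables the webinar highlights is `SLURM_ARRAY_TASK_ID`, which gives each task in a job array its own index. A minimal Python sketch of how a script might use it (assuming a hypothetical submission like `sbatch --array=0-2 job.sh`):

```python
import os

def my_work_item(items, default_task_id=0):
    # Inside a Slurm job array, each array task sees its own index
    # in SLURM_ARRAY_TASK_ID; use it to pick this task's share of
    # the work. Outside Slurm the variable is unset, so fall back
    # to a default for local testing.
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", default_task_id))
    return items[task_id]
```

The same pattern lets one script process a whole list of inputs: each array task runs the identical script but selects a different element.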

Globus World Tour

This hands-on Globus workshop is tailored for researchers, Research Software Engineers (RSEs), research computing facilitators, system administrators, and developers invested in enhancing their research data management skills.

Event Website

GPU Computing and Programming on Expanse

This webinar gives a brief introduction to GPU computing and programming on Expanse. We will cover the GPU architecture, programming with the NVIDIA CUDA Toolkit and HPC SDK via libraries, OpenACC compiler directives, and CUDA, and submitting GPU-enabled jobs on Expanse.

Interactive Video

GPU Hackathon

**Application deadline: March 4, 2021.** This event begins with a preparation day on May 4, followed by the GPU Hackathon running May 11-13. GPU Hackathons provide exciting opportunities for scientists to accelerate their AI research or HPC codes under the guidance of expert mentors from national labs, universities, and industry leaders in a collaborative environment. The SDSC Hackathon is a multi-day event designed to help teams of three to six developers accelerate their own codes on GPUs using a programming model or machine learning framework of their choice. Each team is assigned mentors for the duration of the event.

HPC and Data Science Summer Institute 2022

The application deadline has passed. The HPC and Data Science Summer Institute is a week-long workshop focusing on a broad spectrum of introductory-to-intermediate topics in High Performance Computing and Data Science. The program is aimed at researchers in academia and industry, especially in domains not traditionally engaged in supercomputing, who have problems that cannot typically be solved using local computing resources.

Interactive Videos | GitHub Repository

Implementing Research Data Management for Labs & Grants

Implement a practical and well supported data management plan for your research lab, project or grant with SeedMeLab.

Webinar Recording

Interactive Computing on High Performance Computing Resources

Interactive computing includes commonly used programs, such as word processors or spreadsheet applications, running on user devices (mobile phones, laptops). Interactive high-performance computing (HPC) involves real-time user inputs that result in actions being performed on HPC compute nodes. In this session we’ll present an overview of interactive computing tools and methods.

Interactive Video

Introduction to Neural Networks, Convolutional Neural Networks and Deep Learning

This webinar will be a quick introduction to and overview of neural networks and convolutional networks, with a demonstration of running deep learning models in an HPC environment.

Interactive Video

Introduction to Singularity: Creating and Running Containers for High-Performance Computing

In this webinar, Yuwu Chen from TSCC User Services will show how to build Singularity images and then run them on the SDSC supercomputer clusters such as TSCC. Yuwu will also be sharing his insider knowledge of best practices along with pitfalls to avoid while working with Singularity.

Interactive Video

Introduction to TSCC 2.0

This training will cover everything users need to know about the new TSCC 2.0 system that will be launched in phases starting late spring. Topics will include changes to the TSCC system, scheduler, queues, software stack, accounting, and policies for using TSCC.

Recording

Kubernetes for Science Compute

Several new scientific compute resources are becoming available only through Kubernetes and their users will have to adapt their workloads to interface to it. This tutorial provides the basic Kubernetes notions any science user will need, paired with extensive hands-on exercises on a production-quality system to better explore the details. 

Repository | Video

Lustre User Group Conference (LUG23)

The annual Lustre User Group conference is the high performance computing industry’s primary venue for discussion on the open source Lustre file system and other technologies. The conference focuses on the latest Lustre developments and allows attendees to network with peers.

Neuroscience Gateway Workshop

The Neuroscience Gateway (NSG), a free and open platform, eliminates administrative and technical barriers and enables neuroscientists to do large scale modeling and data processing using various tools on supercomputers. NSG is also a platform for dissemination of neuroscience software.

NSF Convergence Accelerator Workshop: Societal Shock Resilience

This 3-day workshop (June 7, 8, and 11), supported by the NSF Convergence Acceleration program, brings together a multi-hazard trans-disciplinary community to discuss and revise a proposed ‘Societal Shock Resilience Framework’.

Office of Rep. Peters' Hackathon 2023

The Office of Congressman Scott Peters, along with SDSC, will host a hackathon for students in district 50. Hackathon participants will have the opportunity to learn more about the Congressional App Challenge and ask our panel of experts questions. Students will also be introduced to programming in Jupyter Notebook. In addition, students can tour the Data Center and learn about supercomputers.

Link

On-Ramp: Learn market basics for academic projects

[Invitation Only] Explore revenue-generating opportunities and take control of the future of your software solution with the Rev-Up Program! Learn more about the program and engage with sustainability experts to see if it is the right fit for you at On-Ramp, October 26, 11 a.m.-1 p.m. PT.

Parallel and GPU Computing with MATLAB

In this session you will learn how to solve and accelerate computationally and data-intensive problems that are becoming common in the areas of machine learning and deep learning using multicore processors, GPUs, and computer clusters.

Repository | Interactive Video

Parallel Computing Concepts

In this webinar we cover supercomputer architectures, the differences between threads and processes, implementations of parallelism (e.g., OpenMP and MPI), strong and weak scaling, limitations on scalability (Amdahl’s and Gustafson’s Laws) and benchmarking.

Interactive Video

Performance Tuning and Optimization

This session is intended for attendees who do their own code development and need their calculations to finish as quickly as possible. We cover effective use of cache, loop-level optimizations, and other topics for writing and building optimal code.

Interactive Video

Rev-Up's On-Ramp: Learn market basics for academic projects

On-Ramp is a short but intense course that will drive your team to evaluate revenue generation with a view toward parallel or replacement funding streams. Taking place from May 3-6, 2022, On-Ramp is open for all leaders and team members of San Diego Supercomputer Center projects.

Rich Data Sharing for HPC Users

This free webinar will introduce HPCShare, a web-based resource for users of SDSC’s high-performance computing resources, including Expanse, to easily share small- to medium-scale datasets in an efficient and organized manner. Attendees will learn about using HPCShare and SDSC’s SeedMeLab scientific data management system. Hosted by SDSC Visualization Group Lead Amit Chourasia.

Interactive Video

Run your Jupyter Notebooks anywhere: Scaling up your Projects from your Laptop

In this webinar, we demonstrate how to transition your Jupyter Notebooks from a local machine to the Expanse HPC system using command-line tools and the Expanse Portal. We cover creating transferable software environments, scaling up calculations to large datasets, parallel processing, and running Jupyter Notebooks in batch mode.

Interactive Video

Scientific Computing with Kubernetes

In this webinar we provide recipes for transitioning scientific workloads that currently run on traditional batch systems to Kubernetes systems. Kubernetes is batch-like in nature, but there are some differences that science users should be aware of. We will also briefly describe capabilities that are not found in traditional batch systems that can improve the effectiveness of scientific computing.

Interactive Video

SDSC's Annual High Performance Computing and Data Science Summer Institute

*Application deadline: Sunday, May 16 (EXTENDED)* This year’s Summer Institute continues SDSC’s strategy of bringing HPC to the “long tail of science,” i.e., providing resources to a larger number of modest-sized computational research projects that represent, in aggregate, a tremendous amount of scientific research and discovery.

Interactive Video | GitHub Repository

SDSC's Annual High Performance Computing and Data Science Summer Institute - We're going Virtual!

This year’s Summer Institute continues SDSC’s strategy of bringing HPC to the “long tail of science,” i.e., providing resources to a larger number of modest-sized computational research projects that represent, in aggregate, a tremendous amount of scientific research and discovery. The application period is now closed.

SDSC's Cyberinfrastructure-Enabled Machine Learning Summer Institute

*APPLICATION CLOSED* The CIML Summer Institute introduces ML researchers, developers, and educators to the techniques and methods needed to migrate their ML applications from smaller, locally run resources, such as laptops and workstations, to large-scale HPC systems such as the SDSC Expanse supercomputer.

Interactive Video

SDSC GPU Hackathon

Application deadline: March 20, 2022. The SDSC GPU Hackathon is a multi-day (May 3 plus May 10-12), intensive, hands-on event designed to help computational scientists and researchers port, optimize, and accelerate their applications using GPUs. These events pair participants with dedicated mentors experienced in GPU programming and development. Representing distinguished scholars and preeminent institutions from around the world, the teams of mentors and attendees work together to realize performance gains and speedups by taking advantage of parallel programming on GPUs.

SDSC’s HPC/CI Training Series

SDSC’s High Performance Computing (HPC)/Cyberinfrastructure (CI) Training Series was developed to support UC San Diego undergraduates and graduates interested in furthering their knowledge of HPC concepts through hands-on training, as well as building a team to compete in the Student Cluster Competition held at the annual International Conference for High Performance Computing, Networking, Storage, and Analysis (SC). The program is open to anyone interested in advancing their knowledge and experience of HPC systems and concepts.

GitHub Repository

SGX3 Blueprint Factories Webinar

SGX3 offers a new service called Blueprint Factories, in which we will work with collaborators to better understand the CI needs of entire research communities and national-scale cyberinfrastructure providers. Two factories have launched: the AI and Science Gateways Blueprint Factory and the Sustainability Blueprint Factory. Hear from SGX3 Director Mike Zentner as he shares the status of the first two Blueprint Factories.

SGX3 Webinar: Reproducibility of Computational Research – A Community of Practice

Computational science is increasingly becoming the cornerstone of groundbreaking research across disciplines. However, the issue of reproducibility remains a major concern that calls for a collaborative, community-driven approach.

Singularity Containers

This webinar will briefly introduce how to build Singularity images and how to run them on the SDSC supercomputer clusters. We will also share some insider knowledge of best practices and pitfalls to avoid while working with Singularity.
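For orientation before the webinar, the basic workflow looks something like the following sketch. The image and definition-file names are placeholders, and the exact module or binary name (`singularity` vs. `apptainer`) varies by cluster; note that building images generally requires root, so builds are usually done off-cluster and the resulting image file is copied over.

```shell
# Build an image from a definition file (done on a machine where you
# have root privileges, since builds require them).
singularity build mytools.sif mytools.def

# Alternatively, pull a prebuilt image from Docker Hub -- no root needed,
# so this works directly on a cluster login node.
singularity pull docker://python:3.11-slim

# Run a command inside the container on a compute node.
# --bind makes a host directory visible inside the container.
singularity exec --bind /scratch:/scratch python_3.11-slim.sif python --version
```

A common pitfall the webinar addresses is forgetting that only explicitly bound host paths (plus a few defaults like `$HOME`) are visible inside the container.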

Interactive Video

Student Cluster Competition at SC22

Application deadline is March 23. SDSC and UC San Diego are putting together a team to compete in the Student Cluster Competition (SCC), held at the annual Supercomputing conference SC22 in Dallas, TX, November 14-16. Each SCC team consists of a mentor plus six students who will design and build a small cluster with support from mentors and hardware and software vendor partners. They will learn designated scientific applications and apply optimization techniques for their chosen architectures. SCC teams compete in a non-stop 48-hour challenge to complete a real-world scientific workload while keeping the cluster up and running and demonstrating their HPC skills and knowledge to the judges.

Summer 2024 SGX3 Internship Experience

SGX3's Workforce Development team will offer a summer program aimed at graduate students majoring in computer science, computer engineering, or related fields. Students will be funded by SGX3 to join the TACC science gateway team for the summer, working on live, impactful gateways. Each student will also receive mentorship from TACC researchers and opportunities to grow their leadership skills. Apply before January 31, 2024.

Technology Forum: Expanse Supercomputer for Industry

SDSC's newest supercomputer, Expanse, supports SDSC's vision of "Computing without Boundaries" by increasing capacity and performance for thousands of users of batch-oriented and science gateway computing, and by providing new capabilities that enable research increasingly dependent on heterogeneous, distributed resources composed into integrated and highly usable cyberinfrastructure. It also introduces new technical capabilities such as direct liquid cooling. SDSC has acquired additional Expanse capacity specifically to support industrial research and collaborations.

Webinar Recording

Technology Forum: Heterogeneous Computing and Composable Architectures with Next-Gen Interconnects

Join Alan Benjamin, CIO of GigaIO, as he addresses the need for composable infrastructure for heterogeneous computing: the ability to balance CPU-to-GPU compute ratios, create systems with different types of GPUs, achieve optimal GPU-to-GPU and GPU-to-storage communication, and scale solutions spanning multiple GPU appliances. Learn how GigaIO has developed FabreX, a next-generation interconnect fabric based on PCIe and CXL, to address these challenges.

Technology Forum: Increasing the Impact of High Resolution Topography Data with OpenTopography

The OpenTopography cyberinfrastructure platform enables users to efficiently discover, access, and process massive volumes of high-resolution topography data. OpenTopography also increases the impact of investments in data collection and catalyzes scientific discovery. Join us to learn more about the motivations, technology, and data assets behind the National Science Foundation (NSF)-funded OpenTopography platform.

Technology Forum: SDSC Voyager – An Innovative Resource for AI & Machine Learning

Join us to learn about SDSC’s most recent supercomputer award, the Voyager system. With an innovative system architecture uniquely optimized for deep learning (DL) operations and AI workloads, Voyager will provide an opportunity for researchers to explore and implement new deep learning techniques.

Press Release

Technology Forum with Graphcore: Exploiting Parallelism in Large Scale Deep Learning Model Training: From Chips to Systems to Algorithms

Attend this session to learn how Graphcore aims to address the scale challenges associated with training large models. Get to know our Intelligence Processing Unit (IPU), a purpose-built hardware accelerator with a unique MIMD architecture designed to address the most demanding compute and memory bandwidth needs of modern ML models. Our network-disaggregated architecture uniquely positions us to build highly scalable systems (IPU-PODs) with thousands of accelerators aimed at exploiting various dimensions of parallelism.

Technology Forum with Janssen: Leveraging High-Performance Computing and Cloud Environments for the Analysis of Biobank-Scale Datasets

Some of the most exciting biological datasets in recent years have originated from large-scale biobanking efforts. These high-dimensional datasets include electronic medical records, imaging, and genomic profiles from hundreds of thousands of individuals, significantly increasing the power to understand the risk factors and genetic basis of disease. This seminar will outline some of the common architecture considerations for working with biobank data on-prem and in the cloud.

Triton Shared Computing Cluster (TSCC) 101 Spring Training

This training will cover everything new users need to know about using the TSCC system. Topics will include: an overview of the condo/hotel program; how to apply; account and allocation usage monitoring; environment and software modules; an overview of the various queues; building PBS job scripts; job submission and monitoring; data transfers; and file systems.

Recording | GitHub Content

TSCC 1.0 to 2.0 Transitional Workshop

During this workshop, we will provide an overview of TSCC 2.0 including the new authentication method, new allocation system, new filesystems, shared data transfer options from the current TSCC to TSCC 2.0, software stack, new partition characteristics, and provide examples of SLURM job scripts.

Recording

TSCC 101: Accessing and Running Jobs on TSCC

During this workshop, we will provide an overview of TSCC, including authentication, allocation, filesystems, software stack, partition characteristics, and job submission, with examples of SLURM job scripts.
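To give a sense of what a SLURM job script covered in this workshop looks like, here is a minimal sketch. The account, partition, and module names are placeholders, not actual TSCC values; consult the TSCC documentation for the correct ones.

```shell
#!/bin/bash
#SBATCH --job-name=hello-tscc
#SBATCH --account=abc123        # placeholder allocation/account name
#SBATCH --partition=shared      # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:10:00
#SBATCH --output=%x-%j.out      # log file named after job name and job ID

# Load the software environment on the compute node (module names vary by site).
module purge
module load python

python -c 'print("hello from a TSCC compute node")'
```

You would submit this with `sbatch hello.sh` and monitor it with `squeue -u $USER`; the `#SBATCH` lines are directives the scheduler reads from comments at the top of the script.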

Turn Your Data Portal into a Science Gateway With Globus Compute Webinar

The modern research data portal (MRDP) design pattern is used extensively in research environments to ensure security of an application while facilitating fast data transfer across the wide area network. We have adopted this design pattern to implement a science gateway that enables data search, remote computation, and publication of resulting data products for downstream discovery. We will demonstrate this reference implementation and describe how it may be used by science gateway developers to jumpstart their own efforts.

UC Love Data Week: FAIR and ML, AI Readiness, and AI Reproducibility (FARR)

This session will introduce FAIR and ML, AI Readiness with a focus on the role of institutions and data repositories, and AI reproducibility.

Using Python and Jupyter Notebooks on TSCC

This workshop will focus on providing guidelines for setting up customized Python environments, installing and managing packages using Miniconda/pip, and running secure Jupyter notebooks on the Triton Shared Computing Cluster (TSCC) HPC system.
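One common pattern the workshop builds on is sketched below. The environment name, package versions, port, and host name are placeholders, and SDSC provides its own launch tooling for secure notebooks, so treat this only as an illustration of the general conda-plus-SSH-tunnel approach.

```shell
# Create and activate an isolated environment with Miniconda.
conda create -n myproject python=3.11 -y
conda activate myproject

# Install conda packages first, then add pip-only extras on top.
conda install -y numpy pandas
pip install jupyterlab

# On a compute node: start a notebook server bound to localhost only,
# so it is not exposed on the network.
jupyter lab --no-browser --ip=127.0.0.1 --port=8888

# On your laptop: forward the port through SSH, then open
# http://127.0.0.1:8888 in a browser (host name is a placeholder).
# ssh -L 8888:127.0.0.1:8888 user@login.tscc.sdsc.edu
```

Binding to `127.0.0.1` and tunneling over SSH is what keeps the notebook "secure": the server is never reachable except through your authenticated SSH session.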

GitHub Repository | Interactive Videos

Voyager Part 1: Introduction and User Environment

This is the first of a two-part Voyager training session. Voyager is based on Intel’s Habana Labs AI processors and provides a unique opportunity to use AI-focused hardware for exploring AI in science and engineering. Voyager features Habana’s Gaudi processors optimized for training, Goya processors optimized for inference, 100 GbE all-to-all connections within Gaudi nodes, 24 x 100 GbE RDMA RoCE for scale-out across Gaudi nodes, and a Ceph file system.

Recording

Voyager Part 2: Habana Architecture Deep Dive and Porting of TensorFlow and PyTorch Applications

This is the second of a two-part Voyager training session. Voyager is based on Intel’s Habana Labs AI processors and provides a unique opportunity to use AI-focused hardware for exploring AI in science and engineering. Voyager features Habana’s Gaudi processors optimized for training, Goya processors optimized for inference, 100 GbE all-to-all connections within Gaudi nodes, 24 x 100 GbE RDMA RoCE for scale-out across Gaudi nodes, and a Ceph file system.

Recording

WHPC@SDSC Speaker Series with Nancy Wilkins-Diehr

We cordially invite you to join the Women in HPC @ SDSC speaker series, with our first presenter, Nancy Wilkins-Diehr! Nancy Wilkins-Diehr led programs at SDSC that brought a computing-and-services lens to supporting the multitude of scientific disciplines applying their challenges to high-performance computing and science gateways. Her projects included the Science Gateways Community Institute (SGCI), XSEDE’s Extended Collaborative Support Services (ECSS), and the National Partnership for Advanced Computational Infrastructure (NPACI).
