We are excited to announce our upcoming training series. Whether you are a beginner looking to speed up C code or an advanced user scaling AI models on RIS Compute2, we have a session designed for you.
| Available Trainings | Best For | What You'll Learn | When |
|---|---|---|---|
| Introduction to Parallel Computing (C & OpenMP) | Researchers using traditional simulation codes or those new to parallel programming. | Shared-memory programming, thread management, and loop parallelization (see the short example below the table). | 📆 March 23, 2026 🕑 2:00 PM – 3:00 PM CDT |
| AI Environments: PyTorch & Container Technologies | Data scientists and AI researchers moving from local machines to the cluster. | Creating reproducible environments with Singularity/Apptainer and running PyTorch on RIS Compute2. | 📆 March 27, 2026 🕚 11:00 AM – 12:00 PM CDT |
| Scale Your AI: Multi-Node Training & Profiling | Power users training large models that need multi-node speed and efficiency. | Multi-node PyTorch (DDP), Slurm orchestration, and bottleneck analysis using NVIDIA Nsight. | 📆 March 31, 2026 🕜 1:30 PM – 2:30 PM CDT |
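To give a flavor of the first session, here is a minimal sketch of OpenMP loop parallelization in C. The array size, values, and file name are illustrative placeholders, not workshop materials; the session itself goes deeper into thread management and scheduling.

```c
/* saxpy.c -- a minimal OpenMP loop-parallelization sketch.
 * Compile with: gcc -fopenmp saxpy.c -o saxpy
 * The array size and values below are illustrative only. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double x[N], y[N];
    double a = 2.0;

    /* Initialize inputs serially. */
    for (int i = 0; i < N; i++) {
        x[i] = i * 0.5;
        y[i] = 1.0;
    }

    /* Distribute loop iterations across threads; each thread updates an
     * independent chunk of the arrays, so no synchronization is needed. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        y[i] = a * x[i] + y[i];
    }

    printf("y[N-1] = %f, threads available = %d\n", y[N - 1], omp_get_max_threads());
    return 0;
}
```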
These trainings will take place on Zoom; after you register, you will automatically receive a calendar invite that includes the Zoom link.
❓ Why Attend:
As computational demands grow, efficiency is key. These workshops provide the hands-on skills needed to:
📉 Reduce Walltime: Get your results faster.
📦 Ensure Reproducibility: Build once, run anywhere with containers.
📊 Maximize Resources: Use profiling tools to stop wasting expensive GPU hours.
Prerequisites: General familiarity with the Linux Command Line is recommended for all sessions. Specific prerequisites for each workshop are available on the registration pages.
All attendees must have an RIS Compute2 account. If you do not yet have access to Compute2, please complete RIS onboarding first.
We look forward to helping you push the boundaries of your research!
Have questions? Contact us at the RIS Service Desk.