The Scientific Compute Platform gives WashU research faculty access to computing resources and a job scheduler for large-scale, parallel computing tasks, backed by many CPU and GPU cores, large amounts of RAM, high-speed networks, and high-performance storage systems.
Get Onboarded
Complete the onboarding process and receive access to the Scientific Compute Platform.
RIS User Documentation
Looking for more information and how-tos? Our user manuals can help!
Billing & Estimations
Get insight into computing options, associated costs, and forecasts of future expenses.
Compute Options
General Access
- Jobs are scheduled in a “best effort” fashion as resources are available
- General tier for student researchers and post-docs
- Faculty sponsor required
- No resource guarantees, lower priority, and job limits
Subscription
- Pre-pay monthly for compute and receive priority access to available resources
- Dedicated group membership to manage usage and access
- Receive increased interactive, pending, and running job limits for more flexibility and higher compute resource consumption
- Guaranteed job scheduling SLAs
Consumption
- Pay for on-demand access to compute resources
- Dedicated group membership to manage usage and access
- Receive increased interactive, pending, and running job limits for more flexibility and higher compute resource consumption
Compute Condo
Faculty purchase dedicated hardware for their labs, forming a shared “condominium” within a Host Group.
Buy-In/Condominium Model:
- Purchase servers for expansion and pay a monthly server operations fee.
- A dedicated queue with access controls limited by group membership.
- Priority access to general and other condo resources for bursting capacity.
- Minimum purchase requirement of one compute chassis with four servers.
Applications
RIS operates or supports a collection of research-focused applications for customers. Each application could be considered a service unto itself, but we provide a common framework of abstraction and integration with existing services so that all applications share some measure of consistency.
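Both clusters expose applications through Docker containers and module-load environments (see the feature lists below). As a loose illustration of the module workflow, with hypothetical module names rather than an actual list of what RIS provides:

```bash
# Browse the application environments published as modules (names are hypothetical).
module avail

# Load one application environment into the current shell and verify it.
module load samtools
module list

# Unload everything when finished.
module purge
```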
Compute1 Features
- 15,192 CPU cores
- 243 GPUs (V100, A100, A40, H100)
- 300 TB scratch high-performance clustered file system
- Mellanox HDR network
- IBM LSF Job Scheduler (see the example submission after this list)
- Approved for ePHI and other protected or confidential data
- Docker and module-load application environments
- Command line or Web UI
- Direct access to Storage1 (via the cache layer) and Storage2
- Scalable architecture
- WashU Key enabled
- Access control list security group management
- Custom allocations for centers or projects
- Condo support
- Monthly billing and usage metrics
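Putting the pieces above together, the following is a minimal sketch of a Compute1 submission through LSF using the Docker application environment. The queue name, compute group, storage path, container image, and the LSF_DOCKER_VOLUMES mount are placeholders rather than actual RIS configuration.

```bash
# Sketch of an LSF batch submission with a Docker application environment.
# Queue, group, paths, and image are placeholders, not RIS-confirmed values.
export LSF_DOCKER_VOLUMES="/storage1/fs1/mylab/Active:/storage1/fs1/mylab/Active"

bsub -q general \
     -G compute-mylab \
     -n 4 \
     -R 'rusage[mem=16GB] span[hosts=1]' \
     -oo analysis.%J.log \
     -a 'docker(python:3.11)' \
     python3 /storage1/fs1/mylab/Active/scripts/analysis.py
```

Interactive work follows the same shape with `bsub -Is`, and `bjobs` reports job status.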
Compute2 Features
- 5,000 CPU cores
- 64 H100 GPUs
- 1 PB scratch high-performance file system
- Mellanox NDR network
- Slurm Job Scheduler (see the example batch script after this list)
- Docker, Apptainer, and module load support
- Command line or Web UI
- Direct access to Storage1 and Storage2
- Scalable architecture
- WashU Key enabled
- Access control list security group management
- Custom allocations for centers or projects
- Condo support
- Monthly billing and usage metrics
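As with Compute1, here is a minimal sketch of a Compute2 batch script for Slurm, running the workload inside an Apptainer container. The partition, account, module name, resource requests, and container image are placeholders rather than actual RIS configuration.

```bash
#!/bin/bash
# Sketch of a Slurm batch script; partition, account, module name, and image
# are placeholders, not RIS-confirmed values.
#SBATCH --job-name=analysis
#SBATCH --partition=general
#SBATCH --account=mylab
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00
#SBATCH --output=analysis_%j.log

# Run the workload inside an Apptainer container pulled from Docker Hub;
# a module-load environment is the non-container alternative.
module load apptainer
apptainer exec docker://python:3.11 python3 analysis.py
```

The script would be submitted with `sbatch` and monitored with `squeue`.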