== XSEDE Resources ==

We have free access to many remote supercomputing systems through the XSEDE (eXtreme Science and Engineering Discovery Environment) portal. Each year we receive allocations through our Campus Champions account that users can access to supplement the local resources we have on Beocat. Many of these systems provide capabilities that Beocat does not, such as the 4 TB memory compute nodes on Bridges2, large numbers of 64-bit GPU nodes, and Matlab licenses. While we have some allocations to share, each user may also apply for a personal or group allocation for longer-term access to greater resources. Below is a list of the supercomputers we have access to, their configurations, and the allocations we currently have on each.

=== Bridges2 at the Pittsburgh Supercomputing Center ===

'''488 RM Regular Memory compute nodes''' allocated 37,266 SUs (core-hours)
* 2 x AMD EPYC 7742 --> 128 cores
* 256 GB memory (16 more nodes have 512 GB each)
* 3.84 TB NVMe SSD
* Mellanox HDR 200 Gbps network

'''4 EM Extreme Memory compute nodes''' allocated 8,068 SUs (core-hours)
* 4 x Intel Xeon Platinum 8260M (Cascade Lake) --> 96 cores
* 4 TB memory
* 7.68 TB NVMe SSD
* Mellanox HDR 200 Gbps network

'''24 GPU compute nodes''' allocated 500 SUs (gpu-hours)
* 2 x Intel Xeon Gold 6248 (Cascade Lake) --> 40 cores
* 8 x NVIDIA Tesla V100 32 GB SXM2 GPU cards
* 512 GB memory
* 7.68 TB NVMe SSD
* Mellanox HDR 200 Gbps network

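The allocation figures above are tracked in SUs: core-hours for the RM and EM nodes and GPU-hours for the GPU nodes, as listed next to each node type. As a rough, illustrative sketch of how quickly jobs draw down these allocations (the job sizes below are hypothetical, and actual Bridges2 charging rules, for example on shared partitions, may differ; check the PSC documentation), the following Python snippet estimates the SU cost of a few example jobs:

<syntaxhighlight lang="python">
# Rough SU (service unit) bookkeeping for the Bridges2 allocations listed above.
# RM and EM jobs are charged in core-hours and GPU jobs in GPU-hours, matching
# the allocation figures on this page. The job sizes are hypothetical examples.

ALLOCATIONS = {
    "RM": 37_266,   # core-hours
    "EM": 8_068,    # core-hours
    "GPU": 500,     # GPU-hours
}

def su_cost(partition: str, hours: float, cores: int = 0, gpus: int = 0) -> float:
    """Estimate the SUs a single job consumes on the given node type."""
    return (gpus if partition == "GPU" else cores) * hours

examples = [
    ("RM",  dict(cores=128, hours=24)),   # one full RM node for a day
    ("EM",  dict(cores=96,  hours=12)),   # one EM node for half a day
    ("GPU", dict(gpus=8,    hours=10)),   # all 8 GPUs on one node for 10 hours
]

for partition, job in examples:
    cost = su_cost(partition, **job)
    pct = 100 * cost / ALLOCATIONS[partition]
    print(f"{partition:>3}: {job} -> {cost:,.0f} SUs ({pct:.1f}% of our allocation)")
</syntaxhighlight>

For example, a 24-hour job on one full RM node (128 cores) costs 3,072 SUs, roughly 8% of the 37,266 core-hour RM allocation.
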
=== Expanse at San Diego Supercomputing Center ===
