XSEDE Resources

We have free access to many remote supercomputing systems through the XSEDE (eXtreme Science and Engineering Discovery Environment) portal. Each year we receive allocations through our Campus Champions account that users can draw on to supplement the local resources we have on Beocat. Many of these systems provide capabilities that Beocat does not, such as the 4 TB memory compute nodes on Bridges2, large numbers of GPU nodes with 64-bit (double-precision) capable GPUs, and Matlab licenses. While we have some allocations to share, each user may also apply for a personal or group allocation for longer-term access to greater resources. Below is a list of the supercomputers we have access to, their configurations, and the allocations we currently have on each. Each of these systems is similar to Beocat in that they run the CentOS operating system and the Slurm scheduler and have a similar module system. To get access to these resources, create an account at the XSEDE portal (http://xsede.org/), then contact Dr. Dan Andresen or Dr. Dave Turner.
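
Since all of these systems run the Slurm scheduler and use a module system much like Beocat's, existing Beocat job scripts usually need only minor changes. The sketch below shows the general shape of a Slurm batch script; the account, partition, module names, and program are hypothetical placeholders, so consult each system's user guide and your allocation details for the correct values.

  #!/bin/bash
  # Request one node for one hour. The account and partition values are
  # hypothetical placeholders for whatever your XSEDE allocation provides.
  #SBATCH --job-name=example
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=16
  #SBATCH --time=01:00:00
  #SBATCH --account=your_allocation
  #SBATCH --partition=compute

  # Load the compilers and libraries the program needs; names vary by site,
  # and 'module avail' lists what each system provides.
  module load gcc openmpi

  # Launch the program on the cores Slurm allocated.
  srun ./my_program

Submit the script with 'sbatch' and check its status with 'squeue', the same workflow used on Beocat.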

Bridges2 at the Pittsburgh Supercomputing Center

488 RM (Regular Memory) compute nodes, allocated 37,266 SUs (core-hours)

  • 2 x AMD EPYC 7742 --> 128 cores
  • 256 GB memory (16 more nodes have 512 GB each)
  • 3.84 TB NVMe SSD
  • Mellanox HDR 200 Gbps network

4 EM (Extreme Memory) compute nodes, allocated 8,068 SUs (core-hours)

  • 4 x Intel Xeon Platinum 8260M (Cascade Lake) --> 96 cores
  • 4 TB memory
  • 7.68 TB NVMe SSD
  • Mellanox HDR 200 Gbps network

24 GPU compute nodes, allocated 500 SUs (GPU-hours)

  • 2 x Intel Xeon Gold 6248 (Cascade Lake) --> 40 cores
  • 8 x NVIDIA Tesla V100 32 GB SXM2 GPU cards
  • 512 GB memory
  • 7.68 TB NVMe SSD
  • Mellanox HDR 200 Gbps network
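
As a rough guide to what these SU allocations buy (assuming one SU is charged per core-hour on the RM and EM nodes and per GPU-hour on the GPU nodes, which is what the units above indicate):

  1 RM node  (128 cores) for 1 hour = 128 SUs, so 37,266 SUs ≈ 291 RM node-hours
  1 EM node  (96 cores)  for 1 hour =  96 SUs, so  8,068 SUs ≈  84 EM node-hours
  1 GPU node (8 GPUs)    for 1 hour =   8 SUs, so    500 SUs ≈  62 GPU node-hours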

Expanse at the San Diego Supercomputer Center

This system is still coming online, so we don't have any allocations on it yet.

728 compute nodes

  • 2 x AMD EPYC 7742 --> 128 cores
  • 256 GB memory

4 Large Memory nodes

  • 4 x Intel Xeon Platinum 8260M (Cascade Lake) --> 96 cores
  • 2 TB memory

54 GPU nodes

  • 2 x Intel Xeon Gold 6248 --> 40 cores
  • 4 x NVIDIA Tesla V100 GPU cards
  • 384 GB memory

Cluster-wide capabilities

  • 12 PB Lustre file system
  • 7 PB Ceph object store
  • 56 Gbps bidirectional HDR InfiniBand network

Stampede2 at the Texas Advanced Computing Center

K-State has an allocation of 1,600 SUs (approximately node-hours) on Stampede2. Nodes are not shared, so you are allocated and charged for whole nodes.
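
Since an SU is roughly a node-hour and nodes are never shared, budgeting this allocation is simple arithmetic:

  1 node  for 1 hour  = 1 SU
  4 nodes for 2 hours = 8 SUs
  1,600 SUs ≈ 1,600 node-hours in total, e.g. 16 nodes for 100 hours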

4200 KNL nodes

  • Intel Xeon Phi 7250 Knights Landing processor
  • 68 cores with 4 threads per core --> 272 threads
  • 96 GB memory + 16 GB high-speed MCDRAM memory

1736 SKX compute nodes

  • 2 x Intel Xeon Platinum 8160 Skylake processors --> 48 cores
  • Hyper-threading runs 2 threads per core, so each node appears to have 96 logical cores (see the threading note at the end of this section)
  • 192 GB memory

The network is 100 Gbps Intel Omni-Path in a fat-tree topology.
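
Both node types expose more hardware threads than physical cores, so threaded codes usually need an explicit thread count. A minimal sketch for an OpenMP program on an SKX node, added near the end of the job script (the executable name is a placeholder):

  # One thread per physical SKX core; set to 96 to also use the hyper-threads.
  export OMP_NUM_THREADS=48
  ./my_openmp_program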