Remote Resources through the ACCESS Program

We have free access to many remote supercomputing systems through the ACCESS (Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support) program. Each year we get allocations through our campus champions account that users can access to supplement the local resources we have on Beocat. Many of these systems provide capabilities that Beocat does not, such as the 4 TB memory compute nodes on Bridges2, the large number of nodes with 64-bit (double-precision) GPUs, and Matlab licenses. While we have some allocations to share, each user may also apply for a personal or group allocation for longer-term access to greater resources. Below is a list of the supercomputers we have access to, their configurations, and the allocations we currently have on each. Each of these systems is similar to Beocat in that they run the CentOS operating system and the Slurm scheduler and have a similar module system. To get access to these resources, create an account at the ACCESS website https://access-ci.org/, then contact Dr. Dan Andresen (dan@ksu.edu) or Dr. Dave Turner (daveturner@ksu.edu).
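
Because all of these systems use Slurm and an environment-module setup much like Beocat's, existing job scripts usually port over with only small changes, mainly the partition and account names. The following is a minimal sketch of a batch script that follows that common pattern; the partition, account, and module names are placeholders to be replaced with the values for whichever system you are using.

  #!/bin/bash
  #SBATCH --job-name=example
  #SBATCH --partition=PARTITION      # placeholder: e.g. a shared CPU partition on the chosen system
  #SBATCH --account=ALLOCATION       # placeholder: the charge account for our allocation there
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=4
  #SBATCH --mem=8G
  #SBATCH --time=01:00:00            # walltime as hh:mm:ss

  # Software is loaded through the module system, just as on Beocat
  module load gcc                    # placeholder module; run 'module avail' to see what each system offers

  srun ./my_program

Submit the script with sbatch and monitor it with squeue -u $USER, exactly as on Beocat.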

Bridges2 at the Pittsburgh Supercomputing Center

Our current allocation is 37,266 SUs (core-hours) on the regular compute nodes, 500 SUs (GPU-hours) on the GPU nodes, and 8,068 SUs (core-hours) on the large 4 TB nodes. A sample batch script for Bridges2 follows the node descriptions below.

488 RM (Regular Memory) compute nodes, allocated 37,266 SUs (core-hours)

  • 2 x AMD EPYC 7742 --> 128 cores
  • 256 GB memory (16 more nodes have 512 GB each)
  • 3.84 TB NVMe SSD
  • Mellanox HDR 200 Gbps network

4 EM (Extreme Memory) compute nodes, allocated 8,068 SUs (core-hours)

  • 4 x Intel Xeon Platinum 8260M (Cascade Lake) --> 96 cores
  • 4 TB memory
  • 7.68 TB NVMe SSD
  • Mellanox HDR 200 Gbps network

24 GPU compute nodes, allocated 500 SUs (GPU-hours)

  • 2 x Intel Xeon Gold 6248 (Cascade Lake) --> 40 cores
  • 8 x NVIDIA Tesla V100 32 GB SXM2 GPU cards
  • 512 GB memory
  • 7.68 TB NVMe SSD
  • Mellanox HDR 200 Gbps network
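
On Bridges2 the RM and EM nodes are charged per core-hour and the GPU nodes per GPU-hour, so, for example, a 16-core job that runs for 2 hours on a shared RM node consumes 32 SUs. A minimal sketch of such a job is shown below; RM-shared is PSC's documented shared regular-memory partition, while the account string and module name are placeholders.

  #!/bin/bash
  #SBATCH --partition=RM-shared      # shared regular-memory partition (use RM for whole 128-core nodes)
  #SBATCH --account=ALLOCATION       # placeholder: our Bridges2 charge account
  #SBATCH --ntasks=16                # 16 cores on a shared node
  #SBATCH --time=02:00:00            # 16 cores x 2 hours = 32 SUs

  module load gcc                    # placeholder module name
  srun ./my_program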

Expanse at the San Diego Supercomputer Center

All nodes on Expanse are shared, so you request only the cores you need rather than whole nodes. Our current allocation is 38,000 SUs (core-hours) on the regular compute nodes and 500 SUs (GPU-hours) on the GPU nodes. We do not have any time on the large 2 TB nodes, and since there are only 4 of them they may be difficult to access. A sample batch script for Expanse follows the node descriptions below.

728 compute nodes

  • 2 x AMD EPYC 7742 --> 128 cores
  • 256 GB memory

4 Large Memory nodes

  • 4 x Intel Xeon Platinum 8260M (Cascade Lake) --> 96 cores
  • 2 TB memory

52 GPU nodes

  • 2 x Intel Xeon 6248 --> 40 cores
  • 4 x NVIDIA Tesla V100 GPU cards
  • 384 GB memory

Cluster-wide capabilities

  • 12 PB Lustre file system
  • 7 PB Ceph object store
  • 56 Gbps bi-directional HDR InfiniBand network
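
Because Expanse nodes are shared, a job is charged only for the cores and memory it requests, not for the whole node. The sketch below requests 32 of the 128 cores on one compute node; the partition name follows SDSC's published queue layout, and the account and module names are placeholders.

  #!/bin/bash
  #SBATCH --partition=shared         # shared queue: request cores, not whole nodes
  #SBATCH --account=ALLOCATION       # placeholder: our Expanse charge account
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=32       # 32 of the 128 cores on one EPYC node
  #SBATCH --mem=64G                  # memory is also requested explicitly on shared nodes
  #SBATCH --time=04:00:00            # 32 cores x 4 hours = 128 SUs

  module load gcc                    # placeholder module name
  srun ./my_program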

Frontera at the Texas Advanced Computing Center (TACC)

We also have access to Frontera, which at the time of writing was ranked the 10th fastest supercomputer in the world. Our allocation is 2,000 node-hours on the regular compute nodes and 500 node-hours on the GPU nodes; note that Frontera is charged by the node-hour rather than the core-hour. A sample batch script for Frontera follows the node descriptions below.

8,368 compute nodes

  • 2 x Intel Xeon Platinum 8280 --> 56 cores
  • 192 GB memory

16 Large Memory nodes

  • 4 x Intel Xeon Platinum 8280 --> 112 cores
  • 2.1 TB Optane memory

90 GPU nodes

  • 2 x Intel Xeon 6248 --> 40 cores
  • 4 x NVIDIA Quadro RTX 5000 GPUs
  • 128 GB memory
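
Frontera charges by the node-hour and allocates whole compute nodes, so a 2-node job that runs for 3 hours uses 6 node-hours no matter how many of the 56 cores per node it occupies. A minimal sketch of a small MPI job is given below; the normal queue name follows TACC's published queue layout, ibrun is TACC's standard MPI launcher, and the account and module names are placeholders.

  #!/bin/bash
  #SBATCH --partition=normal         # whole-node CPU queue
  #SBATCH --account=ALLOCATION       # placeholder: our Frontera project code
  #SBATCH --nodes=2                  # nodes are allocated whole on Frontera
  #SBATCH --ntasks-per-node=56       # one MPI rank per core (56 cores per node)
  #SBATCH --time=03:00:00            # 2 nodes x 3 hours = 6 node-hours

  module load intel impi             # placeholder modules; check 'module avail' on Frontera
  ibrun ./my_mpi_program             # ibrun is TACC's MPI launcher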