Research Computing

Discovery Cluster Configuration Summary (October 13, 2016)

– 2 login nodes → Each with dual Intel Xeon CPU E5-2670 @ 2.60 GHz, 264GB RAM – 32 logical cores

– 2 administration nodes → Each with dual Intel Xeon CPU E5-2670 @ 2.60 GHz, 264GB RAM – 32 logical cores

– 64 compute nodes → Each with dual Intel Xeon CPU E5-2650 @ 2.00 GHz, 128GB RAM, 10Gbps TCP/IP network backplane – 16 physical cores

– 64 compute nodes → Each with dual Intel Xeon CPU E5-2650 @ 2.00 GHz, 128GB RAM, 10Gbps TCP/IP network backplane, 56Gbps FDR InfiniBand IPoIB network backplane, RDMA FDR InfiniBand network backplane – 16 physical cores

– 30 compute nodes → Each with dual Intel Xeon CPU E5-2680 v2 @ 2.80 GHz, 64GB RAM, 10Gbps TCP/IP network backplane – 40 logical cores

– 48 compute nodes → Each with dual Intel Xeon CPU E5-2680 v2 @ 2.80 GHz, 128GB RAM, 10Gbps TCP/IP network backplane – 40 logical cores

– 4 large-memory compute nodes → Each with dual Intel Xeon CPU E5-2670 @ 2.60 GHz, 384GB RAM, 2TB local swap space, dual bonded 10Gbps network backplane – 32 logical cores

– 3 Hadoop data compute nodes → Each with dual Intel Xeon CPU E5-2650 @ 2.00 GHz, 128GB RAM, 10Gbps TCP/IP network backplane – 32 logical cores; together these 3 data nodes provide a 50TB HDFS (Hadoop Distributed File System)

– 32 GPGPU compute nodes → Each with dual Intel Xeon CPU E5-2650 @ 2.00 GHz, 128GB RAM, 10Gbps TCP/IP network backplane, and a single NVIDIA Tesla K20m GPGPU (2496 CUDA cores @ 0.71 GHz, 5GB GDDR RAM) – 32 logical cores

– 16 GPGPU compute nodes → Each with dual Intel Xeon CPU E5-2690 v3 @ 2.60 GHz, 128GB RAM, 10Gbps TCP/IP network backplane, and a single NVIDIA Tesla K40m GPGPU (2880 CUDA cores @ 0.75 GHz, 12GB GDDR RAM) – 48 logical cores

– 184 compute nodes → Each with dual Intel Xeon CPU E5-2690 v3 @ 2.60 GHz, 128GB RAM, 10Gbps TCP/IP network backplane – 48 logical cores

– 256 compute nodes → Each with dual Intel Xeon CPU E5-2680 v4 @ 2.40 GHz, 256GB RAM, 10Gbps TCP/IP network backplane – 56 logical cores

In addition, there are compute nodes owned by researchers that are part of the cluster and have queues reserved exclusively for them. These are not listed above.

Storage consists of NFS v3 mounts across all compute, login, and administrative nodes: /home (5TB), with a 20GB soft and a 30GB hard quota per user, served from an Isilon OneFS-based storage system.

For high-speed, large-scale computing there is a 1.47PB (usable) IBM GSS-GPFS-GNR-26 GPFS parallel file system mounted across all compute and login nodes: /shared (133TB) for software modules, and /scratch (/gss_gpfs_scratch, 1.1PB) with no quotas. Scratch usage is monitored, however, and sustained usage above 1TB per user violates the usage policy unless ITS – Research Computing is informed beforehand.
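
Because /scratch has no hard quota, users are expected to keep an eye on their own footprint there. As a minimal sketch of one way to do so, the Python snippet below walks a directory tree and reports its size against the 1TB guideline; the per-user path under /scratch is an assumption for illustration only, and the actual directory layout on the cluster may differ. (For very large trees, a tool such as du will be faster, but the idea is the same.)

    import os

    def directory_size_bytes(path):
        """Sum the sizes of all regular files under `path`."""
        total = 0
        for root, _dirs, files in os.walk(path, onerror=lambda err: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass  # a file may disappear while the tree is being scanned
        return total

    # Assumed layout: a per-user directory directly under /scratch.
    scratch_dir = os.path.join("/scratch", os.environ.get("USER", ""))
    used_tb = directory_size_bytes(scratch_dir) / 1e12
    print(f"{scratch_dir}: {used_tb:.2f} TB used (policy: stay under 1TB sustained)")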

 

There are also similar NFS v3 mounts for user groups that have purchased storage for their own use; these mounts are restricted by group access. Pricing starts at $300/TB/year, depending on whether backup is needed.

TOTAL COMPUTE CORES (excluding login nodes, administrative nodes, and nodes bought by users):
64×16 + 64×16 + 30×40 + 48×40 + 4×32 + 3×32 + 32×32 + 16×48 + 184×48 + 256×56 = 30352
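
As a quick cross-check of this figure, the short Python sketch below reproduces the same total; the node counts and per-node core counts are transcribed directly from the list above (physical cores for the E5-2650 nodes, logical cores for the rest, matching the tally).

    # Node groups as (node count, cores counted per node), taken from the list above.
    node_groups = [
        (64, 16),   # E5-2650 compute nodes (TCP/IP only)
        (64, 16),   # E5-2650 compute nodes with FDR InfiniBand
        (30, 40),   # E5-2680 v2 nodes, 64GB RAM
        (48, 40),   # E5-2680 v2 nodes, 128GB RAM
        (4, 32),    # E5-2670 large-memory nodes
        (3, 32),    # E5-2650 Hadoop data nodes
        (32, 32),   # E5-2650 nodes with Tesla K20m GPGPUs
        (16, 48),   # E5-2690 v3 nodes with Tesla K40m GPGPUs
        (184, 48),  # E5-2690 v3 compute nodes
        (256, 56),  # E5-2680 v4 compute nodes
    ]

    total_cores = sum(count * cores for count, cores in node_groups)
    print(total_cores)  # 30352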

 

You can request more information by emailing researchcomputing@neu.edu.