Usage Guidelines

Logging into and using the Discovery Cluster and the Information Technology Services (ITS) resources in it implies that you have agreed to and will abide by the usage guidelines below. The HPC Cluster operates under policies approved by the Advisory Research Computing Committee (RCC). There are two levels of access to the HPC Resource: member access and guest access. Member access includes all faculty, staff, students, and researchers employed by Northeastern University (NU) who are approved to use these resources by the RCC. An NU faculty member may sponsor individual accounts for themselves and their students, graduate students, research staff, postdocs, and visiting scholars. The faculty member should be the person who actually submits the application for the sponsored accounts. Guest access is temporary and is approved by the RCC on a case-by-case basis. It is usually limited to a week of access on the cluster unless otherwise indicated. Application for member or guest access is made in the same way, using the form available in the Request an account on Discovery Cluster tab.

Usage Policy/Guidelines:

  1. All usage of the NU HPC Resource is provided through individual user accounts and requires completion of an application form and approval by RCC. Each account request must be sponsored by a regular NU faculty member who will be added to various HPC and RCC mailing lists, will be the contact person related to their research group’s use of the NU HPC Resource, and will be responsible for that use.
  2. Users will use only those computing and ITS resources and data for which they have authorization, and only in the manner and to the extent authorized.
  3. Users will use computing and information technology resources only for their intended purpose.
  4. Users will protect the confidentiality, availability, and integrity of computing and information technology resources, including data.
  5. Users will abide by applicable laws and University policies and all applicable contracts and licenses and respect the copyright and intellectual property rights of others, including the legal use of copyrighted material.
  6. Users will respect the finite capacity of resources and limit use so as not to consume an unreasonable amount of resources or to interfere unreasonably with the activity of others.
  7. Users will respect the privacy and personal rights of others.
  8. The disk space on Discovery cluster is for use with the cluster only. Do not use it to store non-cluster-related files.
  9. The disk array on Discovery cluster is NOT backed up. Any important files should be copied elsewhere.
  10. Programs run on the cluster must be submitted through the SLURM scheduler partitions as described in the Submitting Jobs on Discovery Cluster tab.
  11. The login nodes are not to be used for any interactive work or jobs. Obtain interactive nodes as described in the Submitting Jobs on Discovery Cluster tab.
  12. All home directories have a 20GB limit. You can request additional space, but there is a nominal usage charge above this limit. Use the Contacting Research Computing Staff tab to send in your request. Please note that the hard limit set on every home directory is 30GB; however, it is the user's responsibility to keep usage at or below 20GB. If you run jobs that go over the hard limit, your jobs will hang or be killed automatically, and you may not even be able to log in to your account. So make sure you do not exceed 20GB. Use the command “du -hs /home/<your_user_login_id>” to check your home directory usage before submitting long-running jobs.
  13. After your job completes please remove your output files from the /scratch/<your_user_name> folder. If you do not do this you may have these files deleted. More information on file staging and using /scratch effectively for serial and parallel jobs is in the Submitting Jobs on Discovery Cluster tab.
  14. Your home directory (/home/<your_user_name>) should not be used for file staging.
  15. DO NOT ABUSE the NFS mount /home. Use temporary storage space on cluster nodes or the 1.1PB GSS-GPFS Parallel File System common cluster mount /scratch (/gss_gpfs_scratch). The most common disastrous mistake made on the cluster, by both new and experienced users, is performing an excessive number of data reads or writes across NFS. We strongly advise that you move any data you need to manipulate to the cluster node's file system (/tmp, in a directory named after your user name), do any reads and writes the job needs locally on the node (not via NFS), then transfer the needed data back to your home directory. You can also use the /scratch (/gss_gpfs_scratch) mount instead of local /tmp. Make sure you clean up your temporary data by removing the directory you created in /tmp. This is described in more detail in the Submitting Jobs on Discovery Cluster tab.
  16. If you do not know what you are doing, contact us first. NOTE: excessive NFS I/O is liable to bring the cluster to a halt. Such jobs violate the compliance requirement; their owners will have their account login at least temporarily suspended, and all their jobs will be terminated.
  17. All jobs will terminate after a WALL CLOCK time of 24 hours so please checkpoint your jobs or partition them to run within the allotted time limit. If you need more time contact us and RCC will review your request.
  18. All accounts are monitored for inactivity and are liable to be removed after 20 weeks of inactivity. If you want your account to remain active, make sure you log in at least once every 20 weeks. ITS is not responsible for data lost if your account is removed.
  19. Periodically, the users of the Discovery Cluster will be sent an administrative email asking if they wish to continue using the Discovery Cluster. As the email will indicate, you must respond to these warning emails or your account will be removed after 20 weeks of inactivity.
  20. “/scratch (/gss_gpfs_scratch)” does not have quotas enabled. Consistently using over ~1TB of scratch space for two or more weeks at a time is not permitted. Since this space is limited to 1.1PB and we have more than 500 users, please make sure you “rsync” or “SFTP” your data elsewhere. /scratch space is exclusively for input and output data during computational runs, not for long-term file storage. You must stage your files here before a run and remove them after it; this should be integrated into your “interactive” or “batch” job scripts. You will be sent an email if you consistently use /scratch beyond the limits indicated. If you need long-term storage on the cluster, you can purchase it; rates start at $300 per TB per year.
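The stage-in / compute / stage-out pattern described in items 13–15 can be sketched in a short shell script. All paths and file names below are illustrative placeholders rather than real Discovery paths, and the `tr` command stands in for your actual computation:

```shell
# Sketch of the stage-in / compute / stage-out pattern (all paths are placeholders).
HOME_DATA=$(mktemp -d)                          # stands in for your /home directory
echo "input data" > "$HOME_DATA/data.txt"       # pretend this is your input file
WORKDIR="${TMPDIR:-/tmp}/${USER:-demo}-staging-example"  # local disk, not NFS
mkdir -p "$WORKDIR"
cp "$HOME_DATA/data.txt" "$WORKDIR/"            # one read across NFS
( cd "$WORKDIR" && tr a-z A-Z < data.txt > results.txt )  # the "computation", local I/O only
cp "$WORKDIR/results.txt" "$HOME_DATA/"         # one write back across NFS
rm -rf "$WORKDIR"                               # always clean up your /tmp directory
cat "$HOME_DATA/results.txt"                    # prints: INPUT DATA
```

The same pattern applies when staging through /scratch (/gss_gpfs_scratch) instead of local /tmp; only the WORKDIR path changes.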

INTERACTIVE USAGE, using INTERACTIVE NODES checked out through the SLURM scheduler, is defined as: any work that either requires a graphical user interface (for input or output) or *requires* a user to interact with the program while it is running. Tasks that commonly fall in this category include file editing, software compilation, web browsing, and data visualization. It is worth noting that many graphical applications have a non-graphical mode.
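As a concrete sketch, checking out an interactive node with SLURM might look like the command below. The partition name “interactive” and the one-hour time request are placeholders, not Discovery's actual partition names; see the Submitting Jobs on Discovery Cluster tab for the real partitions:

```shell
# Hypothetical interactive checkout; "--partition=interactive" is a placeholder.
# --pty attaches your terminal to a shell running on the allocated compute node.
srun --partition=interactive --nodes=1 --ntasks=1 --time=01:00:00 --pty /bin/bash
```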

BATCH USAGE, using the other queues for job submission via the SLURM scheduler, is defined as: any work that does NOT require user interaction or graphical input/output. This is the method by which most work will be performed on the cluster.
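A minimal batch job script, sketched below, shows how the 24-hour wall-clock limit (item 17) is expressed as an #SBATCH directive. The partition name, job name, and program path are placeholders, not Discovery's actual values; the snippet simply writes the script out so you can inspect it before submitting:

```shell
# Write a hypothetical batch job script (all names below are placeholders).
cat > myjob.sbatch <<'EOF'
#!/bin/bash
#SBATCH --partition=general          # placeholder partition name
#SBATCH --job-name=example
#SBATCH --time=24:00:00              # jobs are terminated at 24 h wall clock
#SBATCH --output=example.%j.out      # %j expands to the SLURM job ID
./my_program                         # replace with your actual command
EOF
# Submit on the cluster with:  sbatch myjob.sbatch
```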

Failure to comply with the appropriate use of ITS computing resources will result in your account being disabled and the non-compliance being reported to higher authorities, which may subject the user(s) to disciplinary action.