Connecting to and using the HPC
This page describes how to connect to the High Performance Computing cluster (HPC) and how to use the features available on the login nodes.
To connect to the HPC, you must either be on-campus or be connected to the FSU VPN.
There are two ways to use the HPC:
- You can use any SSH client, including built-in Windows, Mac, and Linux terminal applications.
- Or, you can access the HPC through our web-based Open OnDemand portal from any modern web browser.
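For example, connecting from a terminal with an SSH client looks like the following (the username and hostname below are illustrative placeholders, not the actual cluster address; use the values from your account documentation):

```shell
# Connect to a login node over SSH.
# USERNAME and the hostname are placeholders -- substitute your own
# credentials and the login address given in your account information.
ssh USERNAME@hpc-login.example.edu
```

The same command works from the built-in terminals on Windows (PowerShell), macOS, and Linux.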
After you're connected
Once you authenticate, you will be connected to one of our HPC Login Nodes. When you first log in, your working directory will be your home directory, which has a 150GB disk quota.
If you temporarily need more storage space, you can request scratch space in our general access scratch volume.
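As a quick sanity check against the 150GB quota, you can inspect your current home directory usage with standard Linux tools (a sketch; the site may also provide a dedicated quota command for the parallel filesystem):

```shell
# Confirm your working directory after login (your home directory)
pwd

# Summarize the total disk usage of your home directory
du -sh "$HOME"
```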
If you are a member of a research group with dedicated shared storage space, you will have access to directories other than your home directory. To manage your group enrollments, visit our self-service portal. To see which groups you are enrolled in and which resources you have access to, use the RCCTool:
Tools available on the HPC
There are many software packages, libraries, and programming environments installed on the HPC. Additionally, all the standard Linux utilities are available, including text editors such as vi and nano and common command-line tools. See our software inventory for a more comprehensive list.
Copying data to and from the HPC
You will very likely need to copy data between the HPC and external systems, such as your workstation or other servers. For details about how to transfer data to and from the HPC parallel filesystem, refer to our storage overview.
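For instance, standard transfer tools such as `scp` and `rsync` can be run from your local machine (the username, hostname, and paths below are illustrative assumptions):

```shell
# Copy a single local file to your HPC home directory
# (USERNAME and the hostname are placeholders)
scp ./results.csv USERNAME@hpc-login.example.edu:~/

# For larger transfers, rsync can resume interrupted copies
# and skips files that are already up to date
rsync -avP ./dataset/ USERNAME@hpc-login.example.edu:~/dataset/
```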
Commonly used tools
We provide quickstart guides for many commonly used packages and platforms:
If you need to compile custom software to run on the HPC, we provide several compilers on the HPC:
If you are new to the HPC, we highly recommend our tutorial for compiling and submitting a parallel program.
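As a taste of that workflow, compiling a simple parallel program typically looks like the following (a sketch assuming an MPI compiler wrapper such as `mpicc` is on your path; the module names shown are assumptions, so check `module avail` on the cluster):

```shell
# Write a minimal MPI "hello world" in C (illustrative example)
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Load a compiler toolchain and build with the MPI wrapper
# (module names are assumptions; see the software inventory)
# module load gnu openmpi
# mpicc hello_mpi.c -o hello_mpi
```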
Submitting HPC jobs
The primary purpose of the HPC Login Nodes is to provide access to our job scheduler, Slurm. The scheduler allows researchers to submit and manage jobs on our HPC Compute Nodes, which actually run the jobs.
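As a minimal sketch, a Slurm batch job is a shell script with `#SBATCH` directives that you hand to `sbatch` from a login node (the resource values below are illustrative assumptions; use limits appropriate for your account):

```shell
# Write a minimal Slurm batch script (all values are illustrative)
cat > hello_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello        # job name shown in the queue
#SBATCH --ntasks=1              # number of tasks (processes)
#SBATCH --time=00:05:00         # wall-clock limit (HH:MM:SS)
#SBATCH --mem=1G                # memory request

echo "Hello from $(hostname)"
EOF

# Submit from a login node; Slurm runs the script on a compute node
# sbatch hello_job.sh

# Monitor your queued and running jobs
# squeue -u $USER
```

Note that the `#SBATCH` lines are comments to the shell, so the same script also runs directly for quick local testing.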