HPC README FIRST
Accounts and Logging In
- Log in to the Lipscomb cluster by using ssh to dlx.uky.edu and authenticating with your link blue userid and password.
- For information about getting an HPC account, see the HPC Account Information page.
- If you have not yet changed your link blue password from the default, please go to the UK Account Manager and change it as soon as possible.
- If you tire of typing your link blue password each time you log on to the cluster, you may want to set up an ssh public key for authentication. See How do I set up an ssh key pair? for details.
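One common way to set this up with an OpenSSH client is sketched below (the key filename and userid are placeholders; if the cluster's ssh does not accept ed25519 keys, substitute -t rsa):

```shell
# Generate a key pair on your workstation; a passphrase is optional but recommended
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_dlx

# Install the public key on the cluster (enter your link blue password one last time)
ssh-copy-id -i ~/.ssh/id_ed25519_dlx.pub userid@dlx.uky.edu

# From now on, log in with the key instead of your password
ssh -i ~/.ssh/id_ed25519_dlx userid@dlx.uky.edu
```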
- Please set up a dot forward (.forward) file as soon as possible, so that email sent to you on the login node is forwarded to an address you read regularly. This may be the only way we can contact you about problems with your jobs, your account, the file system, or other issues.
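Creating the file takes one command on the login node (the address below is only a placeholder; substitute one you actually read):

```shell
# Forward all mail for your account to an outside address
echo "userid@example.com" > ~/.forward

# Verify the contents
cat ~/.forward
```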
- New users should start with the Getting Started pages.
- For help with your link blue account (userid or password) or for general IT questions, contact the IT Service Desk at 859-218-HELP (859-218-4357) or email@example.com.
- For information about current HPC problems and outages, see the HPC News blog.
- When you log on to the cluster, please read the message of the day (motd). It often contains important information about current issues. You can display the motd at any time with the sysstatus command.
- For the answers to many questions, see the HPC FAQ page.
- See the HPC Announcements page for general announcements about the facility.
- To see what software packages are installed, use the module avail command.
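Typical environment-modules usage looks like the following; the package name here is only an example, so pick one from your own module avail listing:

```shell
# List every installed package and version
module avail

# Load a package into your current shell environment
module load intel

# Show what is currently loaded
module list
```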
- To report problems or ask for technical help with HPC, email firstname.lastname@example.org (include your userid).
- Files should not be left in scratch for more than 30 days. Older files may be purged without warning.
- There are no backups for scratch.
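To see which of your files are at risk, a standard find invocation works (this assumes your scratch directory is /scratch/userid, as described below):

```shell
# List files under your scratch directory not modified in the last 30 days
find /scratch/$USER -type f -mtime +30 -ls

# Copy anything worth keeping back to your (backed up) home directory first, e.g.:
# cp -a /scratch/$USER/keep_this ~/
```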
- DO NOT execute PARALLEL runs on the login nodes. Use the batch scheduler.
- DO NOT run long serial jobs on the login nodes. Use the batch scheduler.
- Please use disk space wisely, both home and scratch. The resources are shared by everyone on the cluster.
- User directories in /scratch on the cluster are automounted by the file system as they are needed. If you list all subdirectories under /scratch (ls /scratch), you'll see only a few: the ones currently in use. However, there are hundreds of userids, and each has a scratch directory. Just cd to one (cd /scratch/userid) and it will be 'magically' mounted for you.
- The Lipscomb cluster has 256 Basic Nodes with 16 cores and 64 GB of memory each.
- If your memory requirements for a single node exceed the Basic Nodes, try one of the 8 Hi-Mem ('Fat') Nodes with 32 cores and 512 GB of memory each.
- For a detailed description of the cluster hardware, see the Hardware page.
- The default quota for a home directory is 1 TB (1000 GB).
- There is no quota on your scratch directory (/scratch/userid), but remember the scratch areas are not backed up.
- The emacs and vim (vi) editors are available. Enter the vimtutor command to bring up a short tutorial and introduction to vim. Here is the vim User Manual.
- If you want to display graphics on your workstation, you'll probably want to run an X server. Add the '-Y' flag to the ssh command or set up X tunneling on your ssh client. See How do I set up an X server? for details.
- To compile Fortran 77 or 90 programs with the Intel compiler, use the ifort command.
- To compile C or C++ programs with the Intel compiler, use the icc command.
- To compile with the OpenMPI libraries included, use the mpif90 or mpicc commands instead.
- For information on the Intel Compilers, see the Intel Compilers page.
- For information on the GNU compilers, see the GNU Compilers page.
- The Intel Math Kernel Library (IMKL) has optimized versions of BLAS, LAPACK, BLACS, and ScaLAPACK.
- To link in the IMKL, use -L$(INTEL_MKL_LIBS) and -lmkl in your makefile.
- See the Intel Library page for details on the IMKL.
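Putting the compiler and library notes above together, a build might look like this; the file names are placeholders, and the exact MKL link line can vary by compiler version, so consult the Intel Library page if the link fails:

```shell
# Serial builds with the Intel compilers
ifort -O2 -o myprog   myprog.f90
icc   -O2 -o myprog_c myprog.c

# MPI builds: the wrappers supply the OpenMPI include and link flags
mpif90 -O2 -o myprog_mpi myprog_mpi.f90

# Linking in the IMKL from the command line
ifort -O2 -o myprog_mkl myprog_mkl.f90 -L$INTEL_MKL_LIBS -lmkl
```

Note that $(INTEL_MKL_LIBS) is makefile syntax; on the command line the same variable is written $INTEL_MKL_LIBS.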
- The batch scheduler is Moab and the resource manager is SLURM. For more information, see the Batch Job FAQ page.
- Look in the /share/cluster/examples directory for sample jobs.
- Use the batchg09 command to run Gaussian jobs.
- Use the command sbatch script to submit a job, where script is the name of your job script.
- The srun command is no longer supported. Use the sbatch command instead.
- Use the squeue command to check the job queues. For example, squeue -u userid, where userid is your userid, will list all of the jobs you have queued.
- Use the command scancel job_id to terminate a batch job.
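As a concrete sketch, a minimal job script might look like the following. The directives shown are standard SLURM options; any site-specific partition or account flags are omitted here, so compare against the samples in /share/cluster/examples before submitting:

```shell
#!/bin/bash
#SBATCH --job-name=mytest          # a name of your choosing
#SBATCH --nodes=2                  # two Basic Nodes
#SBATCH --ntasks-per-node=16       # all 16 cores on each node
#SBATCH --time=01:00:00            # wall-clock limit (HH:MM:SS)

# srun is not supported here; launch MPI ranks with mpirun instead
mpirun ./myprog_mpi
```

Submit it with sbatch myjob.sh, watch it with squeue -u userid, and cancel it with scancel job_id if needed.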
- Basic compute node quotas are:
    - 16 nodes per user when other eligible jobs are waiting.
    - 32 nodes per user when no other eligible jobs are waiting.
- Group node quotas may also apply.
- Per-user and Group node quotas are subject to change.