HPC README FIRST

Accounts and Logging In

  • Log in to the Lipscomb cluster by using ssh to dlx.uky.edu and authenticating with your link blue userid and password.
  • When you log on to the cluster, please read the message of the day (motd). It often contains important information about updates, current issues, and known problems. You can display the motd at any time with the sysstatus command.
  • For information about getting an HPC account, see the HPC Account Information page.
  • If you have not yet changed your link blue password from the default, please go to the UK Account Manager and change it as soon as possible.
  • If you don't want to type your link blue password each time you log on to the cluster, set up an ssh key pair for authentication (see the example below). See How do I set up an ssh key pair? for details.
  • Please set up a dot forward (.forward) file as soon as possible, forwarding email on the login node to an email address that you read regularly (see the example below). This may be the only way the sysadmins can contact you about problems with your jobs, your account, the file system, or other issues.
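  • For example, to set up key-based login (a minimal sketch; run these commands on your workstation and substitute your own userid):
    ssh-keygen -t rsa                # generate a key pair; accept the defaults or choose a passphrase
    ssh-copy-id userid@dlx.uky.edu   # copy the public key to the cluster (enter your link blue password once)
    ssh userid@dlx.uky.edu           # later logins authenticate with the key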
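  • To set up forwarding, log on to the cluster and put your preferred email address (the address below is a placeholder) in a file named .forward in your home directory:
    echo "you@example.edu" > ~/.forward   # mail sent to your cluster account is forwarded to this address
    cat ~/.forward                        # verify the contents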

Getting Help

  • New users should start with the Getting Started pages.
  • For help with your link blue account (userid or password) or for general IT questions, contact the IT Service Desk at 859-218-HELP (859-218-4357) or helpdesk@uky.edu.
  • For information about current HPC problems and outages, see the HPC News blog.
  • For the answers to many questions, see the HPC FAQ page.
  • See the HPC Announcements page for general announcements about the facility.
  • To see which software packages are installed, use the module avail command (see the example below).
  • To report problems or ask for technical help with HPC, email help-hpc@uky.edu (include your userid).
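  • For example, to list and load installed software (the intel module name below is an assumption; check module avail for the exact names and versions on the cluster):
    module avail        # list all installed software packages
    module avail intel  # list packages whose names begin with 'intel'
    module load intel   # load a package into your environment
    module list         # show the modules you currently have loaded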

Cautions

  • Files should not be left in scratch for more than 30 days. Older files may be purged without warning.
  • There are no backups for scratch.
  • Please DO NOT execute PARALLEL runs on the login nodes. Use the batch scheduler.
  • Please DO NOT run long serial jobs on the login nodes. Use the batch scheduler.
  • Please DO NOT run non-GPU-enabled code on the GPU nodes.
  • Please use disk space wisely, both home and scratch. The resources are shared by everyone on the cluster.
  • If you are using code from the previous DLX cluster, we strongly recommend you recompile your source code with the current Intel compilers.
  • All of the job queues have wall clock time limits and processor core limits. You normally just specify the time and cores you need, except in a few cases, and let the scheduler pick the queue (see the sketch below). To see all the queues currently defined and their limits, use the queue_wcl command. The queues and their limits are subject to change at any time.
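  • As a minimal sketch, time and cores are requested with standard SLURM directives in your job script (the values below are placeholders; a fuller script example appears at the end of the Batch jobs section):
    #SBATCH --time=01:00:00   # wall clock time limit requested (1 hour)
    #SBATCH --ntasks=16       # number of processor cores requested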

General information

  • User directories in /scratch on the cluster are automounted by the file system as they are needed. When you list the subdirectories under /scratch (that is, ls /scratch), you will see only a few of them, namely the ones currently in use. However, there are hundreds of userids and each has its own scratch directory. When you cd to one (cd /scratch/userid), it will be 'automagically' mounted for you (see the example at the end of this section).
  • The Lipscomb cluster has 256 Basic Nodes with 16 cores and 64 GB of memory each.
  • If your memory requirements for a single node exceed what a Basic Node provides, try one of the 8 Hi-Mem ('Fat') Nodes, each with 32 cores and 512 GB of memory.
  • For a detailed description of the cluster hardware, see the Hardware page.
  • The default quota for a home directory is 1 TB (1000 GB). To display your quota, use the quota command. Home directory quotas are not currently enforced by the filesystem, but they will be in the future.
  • There is no quota on your scratch directory (/scratch/userid), but remember the scratch areas are not backed up.
  • The emacs and vim (vi) editors are available. Enter the vimtutor command to bring up a short tutorial and introduction to vim. Here is the vim User Manual.
  • If you want to display graphics on your workstation, you'll probably want to run an X server. Add the '-Y' flag to the ssh command or set up X tunneling in your ssh client (see the example below). See How do I set up an X server? for details.
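  • To see the automounting of scratch in action (userid is a placeholder for your own userid):
    ls /scratch          # shows only the scratch directories currently mounted
    cd /scratch/userid   # your directory is automounted when you access it
    ls /scratch          # your directory now appears in the listing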
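  • For example, to log in with X forwarding enabled (userid is a placeholder; xclock is just a convenient test program and may not be installed):
    ssh -Y userid@dlx.uky.edu   # trusted X11 forwarding
    echo $DISPLAY               # should show a forwarded display such as localhost:10.0
    xclock &                    # if installed, a clock window should appear on your workstation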

Compiling

  • To compile Fortran 77 or 90 programs with the Intel compiler, use the ifort command.
  • To compile C or C++ programs with the Intel compiler, use the icc command.
  • To compile with the OpenMPI libraries included, use the mpif90 or mpicc commands instead (example compile commands follow this list).
  • For information on the Intel Compilers, see the Intel Compilers page.
  • For information on the GNU compilers, see the GNU Compilers page.
  • The Intel Math Kernel Library (IMKL) has optimized versions of BLAS, LAPACK, BLACS, and ScaLAPACK.
  • To link in the IMKL, use -L$(INTEL_MKL_LIBS) and -lmkl in your makefile (see the example at the end of this section).
  • See the Intel Library page for details on the IMKL.
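  • A few example compile commands (program and file names are placeholders; load the appropriate compiler and MPI modules first):
    ifort -O2 -o myprog myprog.f90   # Fortran 90 with the Intel compiler
    icc -O2 -o myprog myprog.c       # C with the Intel compiler
    mpif90 -O2 -o mympi mympi.f90    # Fortran 90 with the OpenMPI wrapper
    mpicc -O2 -o mympi mympi.c       # C with the OpenMPI wrapper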
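  • For example, to link the IMKL from the command line (this assumes the INTEL_MKL_LIBS variable is set in your environment, for instance by the Intel module; see the Intel Library page if the library names differ on the current system):
    ifort -O2 -o myprog myprog.f90 -L$INTEL_MKL_LIBS -lmkl   # link the optimized BLAS/LAPACK routines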

Batch jobs

  • The batch scheduler is Moab and the resource manager is SLURM. For more information, see the Batch Job FAQ page.
  • Look in the /share/cluster/examples directory for sample jobs.
  • If your job does not start executing soon after you submit it, use the checkjob command to see why. For example,
    checkjob -v 12345
    BLOCK MSG: job 12345 violates active SOFT MAXNODE limit of 16
    for user <userid> (Req: 1 InUse: 16) (recorded at last scheduling iteration)
  • Use the batchg09 command to run Gaussian jobs.
  • Use the command sbatch script to submit a job, where script is the name of your job script (a complete example appears at the end of this list).
  • The srun command is no longer supported. Use the sbatch command instead.
  • Use the squeue command to check the job queues. For example, squeue -u userid, where userid is your userid, will list all of the jobs you have queued.
  • Use the command scancel job_id to terminate a batch job.
  • Basic compute node quotas are:
    16 nodes per user when other eligible jobs are waiting.
    64 nodes per user when no other eligible jobs are waiting.
  • Group node quotas may also apply.
  • Per-user and Group node quotas are subject to change.
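  • A minimal serial job script, shown here as a sketch (the #SBATCH options are standard SLURM directives; ./myprog and the values are placeholders, and your jobs may need additional options):
    #!/bin/bash
    #SBATCH --job-name=myjob    # a name for the job
    #SBATCH --ntasks=1          # one core for a serial job
    #SBATCH --time=01:00:00     # one hour wall clock limit
    ./myprog                    # the program to run
  • Assuming the script above is saved as myjob.sh, the workflow is:
    sbatch myjob.sh      # submit the job; the job id is printed
    squeue -u userid     # check its status (userid is your own userid)
    scancel 12345        # cancel it if necessary (use the real job id)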

859-218-HELP (859-218-4357), 218help@uky.edu