
Description of the Equipment

A High Performance Computing (HPC) cluster is a specially designed network of computers capable of running applications that exchange data efficiently.

VU MIF HPC is a supercomputer consisting of the following clusters (in the Nodes column the first number is the currently available and the second the total amount):

Title | Nodes | CPU | GPU | RAM (node/GPU) | HDD | Network | Notes
main | 35/36 | 48 | 0 | 384GiB | 0 | 1Gbit/s, 2x10Gbit/s, 4xEDR(100Gbit/s) InfiniBand | CPU
gpu | 3/3 | 40 | 8 | 512GB/32GB | 7TB | 2x10Gbit/s, 4xEDR(100Gbit/s) InfiniBand | NVIDIA DGX-1
power | 2/2 | 32 | 4 | 1024GB/32GB | 1.8TB | 2x10Gbit/s, 4xEDR(100Gbit/s) InfiniBand | IBM Power System AC922

In total: 40/41 nodes, 1912 CPU cores with 17TB of RAM, and 32 GPUs with 1TB of GPU RAM.

In the text below, processor = CPU = core means a single processor core (including all of its hyperthreads, if they are enabled).

Software

The main and gpu partitions run the Qlustar 11 OS, which is based on Ubuntu 18.04 LTS. The power partition runs Ubuntu 18.04 LTS.

You can check the list of installed OS packages with the command dpkg -l (on the login node hpc or on the power nodes).
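
For example, to check whether a particular package is installed (the package name below is only an illustration):

$ dpkg -l | grep -i openmpi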

With the singularity command you can use ready-made container image files from the directories /apps/local/hpc, /apps/local/nvidia, /apps/local/intel, /apps/local/lang, or download images from Singularity and Docker online repositories. You can also create your own Singularity containers using the MIF cloud service.
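
For example, a program can be run inside one of the ready-made images roughly like this (the image file name here is only illustrative; check the directories above for the actual names):

$ singularity exec /apps/local/lang/python.sif python3 myscript.py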

You can prepare your container with singularity, for example:

$ singularity build --sandbox /tmp/python docker://python:3.8
$ singularity exec -w /tmp/python pip install package
$ singularity build python.sif /tmp/python
$ rm -rf /tmp/python

Similarly, you can use R, Julia or other containers whose packages do not require root privileges to install.
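
For instance, an R container could be prepared in the same way as the Python example above (the Docker image and the installed package are assumptions, not recommendations):

$ singularity build --sandbox /tmp/r docker://r-base
$ singularity exec -w /tmp/r R -e 'install.packages("data.table", repos="https://cloud.r-project.org")'
$ singularity build r.sif /tmp/r
$ rm -rf /tmp/r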

If you want to add OS packages to a Singularity container, you need root/superuser privileges. These can be simulated with fakeroot by copying the required library libfakeroot-sysv.so into the container, for example:

$ singularity build --sandbox /tmp/python docker://ubuntu:18.04
$ cp /libfakeroot-sysv.so /tmp/python/
$ fakeroot -l /libfakeroot-sysv.so singularity exec -w /tmp/python apt-get update
$ fakeroot -l /libfakeroot-sysv.so singularity exec -w /tmp/python apt-get install python3.8 ...
$ fakeroot -l /libfakeroot-sysv.so singularity exec -w /tmp/python apt-get clean
$ rm -rf /tmp/python/libfakeroot-sysv.so /tmp/python/var/lib/apt/lists   # you can clean up more of what you don't need
$ singularity build python.sif /tmp/python
$ rm -rf /tmp/python

Ready-made scripts for running your Hadoop jobs with the Magpie suite are available in the directory /apps/local/bigdata.
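
A rough sketch of how such a script might be used (the file name is hypothetical; check /apps/local/bigdata for the actual scripts and their instructions):

$ cp /apps/local/bigdata/magpie.sbatch-hadoop /scratch/lustre/home/$USER/
# edit the copied script: set your working directories under /scratch/lustre and the command to run
$ sbatch /scratch/lustre/home/$USER/magpie.sbatch-hadoop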

With JupyterHub you can run Python calculations from a web browser and use the JupyterLab environment. If you install your own JupyterLab environment in your home directory, you need to install the additional batchspawner package, which will start your environment, for example:

$ python3.7 -m pip install --upgrade pip setuptools wheel
$ python3.7 -m pip install --ignore-installed batchspawner jupyterlab

Alternatively, you can use a container that you made via JupyterHub. In that container you need to install the batchspawner and jupyterlab packages and create a script ~/.local/bin/batchspawner-singleuser with execute permissions (chmod +x ~/.local/bin/batchspawner-singleuser):

#!/bin/sh
exec singularity exec --nv myjupyterlab.sif batchspawner-singleuser "$@"
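
A minimal sketch of how the myjupyterlab.sif image itself could be built, following the same sandbox approach as above (the base image is an assumption):

$ singularity build --sandbox /tmp/jlab docker://python:3.8
$ singularity exec -w /tmp/jlab pip install batchspawner jupyterlab
$ singularity build myjupyterlab.sif /tmp/jlab
$ rm -rf /tmp/jlab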

Registration

  • For VU MIF network users - HPC can be used without additional registration if the available resources are sufficient (monthly limit: 100 CPU-h and 6 GPU-h). Once this limit has been reached, you can request more by filling in the ITOAC service request form.
  • For users of the VU computer network - you must fill in the ITOAC service request form to get access to MIF HPC. After your request has been confirmed, you must create your account in the Waldur portal. More details can be found here.
  • For other users (non-members of the VU community) - you must fill in the ITOAC service request form to get access to MIF HPC. After your request has been confirmed, you must come to VU MIF, Didlaukio str. 47, room 302/304, to receive your login credentials. Please arrange the exact time in advance by phone: +370 5219 5005. With these credentials you can create an account in the Waldur portal. More details can be found here.

Connection

To connect to the HPC you need an SSH application (ssh, putty, winscp, mobaxterm) and Kerberos or SSH key authentication.

If Kerberos is used:

  • Log in to the Linux environment in a VU MIF classroom or on a public terminal with your VU MIF username and password, or log in to uosis.mif.vu.lt with your VU MIF username and password using ssh or putty.
  • Check whether you have a valid Kerberos key (ticket) with the klist command. If the key is missing or has expired, use the kinit command.
  • Connect to the hpc node with the command ssh hpc (no password should be required). See the example below.
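
A typical session on uosis.mif.vu.lt looks like this:

$ klist    # check for a valid Kerberos ticket
$ kinit    # only needed if the ticket is missing or has expired
$ ssh hpc  # connect to the HPC login node; no password should be asked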

If SSH keys are used (e.g. if you need to copy big files):

  • If you don't have SSH keys, you can find instructions on how to create them in a Windows environment here.
  • Before you can use this method, you need to log in with Kerberos at least once. Then create a ~/.ssh directory in the HPC file system and put your SSH public key (in OpenSSH format) into the ~/.ssh/authorized_keys file.
  • Connect with ssh, sftp, scp, putty, winscp or any other software supporting the SSH protocol to hpc.mif.vu.lt with your SSH private key, specifying your VU MIF username. A login password should not be required, but the passphrase of your SSH private key may be. See the example below.
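
For example, from a Linux or macOS machine (the username and key path are placeholders):

$ ssh -i ~/.ssh/id_rsa username@hpc.mif.vu.lt
$ scp -i ~/.ssh/id_rsa bigfile.dat username@hpc.mif.vu.lt:/scratch/lustre/home/username/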

After connecting for the first time, you will not be able to run SLURM jobs for the first 5 minutes, while your SLURM account is being created.

Lustre - Shared File System

The VU MIF HPC shared file system is available in the directory /scratch/lustre.

The system creates the directory /scratch/lustre/home/username for each HPC user, where username is the HPC username.

The files in this file system are equally accessible on all compute nodes and on the hpc node.

Please use these directories only for their intended purpose and clean them up after your calculations.
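
For example, to prepare a working directory for a job (the subdirectory name is only illustrative):

$ mkdir -p /scratch/lustre/home/$USER/myproject
$ cd /scratch/lustre/home/$USER/myproject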

HPC Partitions

Partition | Time limit | RAM | Notes
main | 7d | 7000MB | CPU cluster
gpu | 48h | 12000MB | GPU cluster
power | 48h | 2000MB | IBM Power9 cluster

If a time limit is not specified, the default is 2h in all partitions. The table shows the maximum time limit.

The RAM column gives the amount of RAM allocated to each reserved CPU core.
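
For example, to run a job in the main partition with a 3-day limit instead of the 2h default (the values are illustrative), the job script could contain:

#SBATCH -p main
#SBATCH --time=3-00:00:00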

Batch Processing of Tasks (SLURM)

To use the computing resources of the HPC, you need to create job scripts (sh or csh).

Example:

mpi-test-job.sh
#!/bin/bash
#SBATCH -p main
#SBATCH -n4
module load openmpi
mpicc -o mpi-test mpi-test.c
mpirun mpi-test

After submission and confirmation of your application for the ITOAC services, you need to create a user at https://hpc.mif.vu.lt/. The created user will be included in the relevant project, which will have a certain amount of resources. In order to use the project resources for calculations, you need to provide your allocation number. Below is an example with the allocation parameter “alloc_xxxx_project” (not applicable to VU MIF users; VU MIF users do not have to specify the --account parameter).

mpi-test-job.sh
#!/bin/bash
#SBATCH --account=alloc_xxxx_project
#SBATCH -p main
#SBATCH -n4
#SBATCH --time=minutes
module load openmpi
mpicc -o mpi-test mpi-test.c
mpirun mpi-test

The script contains, as special comments, instructions for the job scheduler.

-p main - which queue (partition) to submit the job to (main, gpu, power).

-n4 - how many processors (cores) to reserve (NOTE: if you request x cores but your program actually uses fewer, the accounting will still count all x requested cores, so we recommend estimating your needs in advance).

The job's initial working directory is the current directory (pwd) on the login node from which the job is submitted, unless it is changed with the -D parameter. Use directories of the HPC shared file system /scratch/lustre for the initial working directory, because it has to exist on the compute node and the job output file slurm-JOBID.out is created there, unless it is redirected elsewhere with the -o or -i parameters (for these it is also advisable to use the shared file system).
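
For example, the working directory and the output file can be pointed to the shared file system directly on the command line (the paths are illustrative; %j is replaced by the JOBID):

$ sbatch -D /scratch/lustre/home/$USER/myproject -o /scratch/lustre/home/$USER/myproject/slurm-%j.out mpi-test-job.sh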

Submit the prepared script with the sbatch command

$ sbatch mpi-test-job.sh

which returns the number (JOBID) of the submitted job.

The status of a pending or running job can be checked with the squeue command

$ squeue -j JOBID

With the scancel command you can stop a running job or remove it from the queue

$ scancel JOBID

If you do not remember the JOBIDs of your jobs, you can list them with the squeue command

$ squeue

Completed jobs are no longer shown by squeue.

If the requested number of processors is not available, your job is placed in the queue. It will stay there until a sufficient number of processors becomes free or until you remove it with scancel.

The output of a running job is written to the file slurm-JOBID.out. Unless specified otherwise, the error output is written to the same file as well. The file names can be changed with the sbatch parameters -o (specify the output file) and -e (specify the error file).
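
For example (the file names are illustrative; %j is replaced by the JOBID):

$ sbatch -o mpi-test-%j.out -e mpi-test-%j.err mpi-test-job.sh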

You can read more about SLURM features in the Quick Start User Guide.
