====== Description of the Equipment ======
  
High Performance Computing (HPC) is a specially designed network of computers capable of running applications that can exchange data efficiently.
  
VU MIF HPC is a supercomputer consisting of the following clusters (the first number is the actual and available amount):
  
^Title ^Nodes ^CPU ^GPU ^RAM        ^HDD    ^Network ^Notes|
====== Software ======
  
The **main** and **gpu** partitions run the [[https://docs.qlustar.com/Qlustar/11.0/HPCstack/hpc-user-manual.html|Qlustar 11]] operating system (OS), which is based on Ubuntu 18.04 LTS. The **power** partition runs Ubuntu 18.04 LTS.
  
You can check the list of installed OS packages with the command ''dpkg -l'' (on the login node **hpc** or on the **power** nodes).
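For example, to check whether a particular package is installed (''gcc'' here is just an arbitrary example):
<code shell>
$ dpkg -l | grep gcc
</code>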
  
With the command [[https://sylabs.io/guides/3.2/user-guide/index.html|singularity]] you can use the ready-made container files in the directories ''/apps/local/hpc'', ''/apps/local/nvidia'', ''/apps/local/intel'', ''/apps/local/lang'', or download images from Singularity and Docker online repositories. You can also create your own singularity containers using the MIF cloud service.
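For example, you can list the available images and run a program from one of them (the image file name below is only an illustration; check the directories for the actual names):
<code shell>
$ ls /apps/local/lang
$ singularity exec /apps/local/lang/python-3.8.sif python3 --version
</code>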
  
You can prepare your container with singularity, for example:
<code shell>
$ singularity build --sandbox /tmp/python docker://python:3.8
</code>
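You can then run commands inside the sandbox; a minimal sketch (not part of the original example), assuming you install Python packages into your home directory with ''pip install --user'':
<code shell>
$ singularity exec /tmp/python python3 --version
$ singularity exec /tmp/python pip install --user numpy
</code>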
Similarly, you can use R, Julia or other containers that do not require root privileges to install packages.
  
If you want to add OS packages to a singularity container, you need root/superuser privileges. With fakeroot we can simulate them and copy the required library ''libfakeroot-sysv.so'' into the container, for example:
<code shell>
$ singularity build --sandbox /tmp/python docker://ubuntu:18.04
</code>
  
The directory ''/apps/local/bigdata'' contains ready-made scripts to run your **hadoop** tasks using the [[https://github.com/LLNL/magpie|Magpie]] suite.
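For example, you would list the available scripts and submit one of them to SLURM with ''sbatch'' (the script name below is only a placeholder; use one from the listing):
<code shell>
$ ls /apps/local/bigdata
$ sbatch /apps/local/bigdata/magpie.sbatch-hadoop    # placeholder name, pick one from the listing
</code>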
  
With [[https://hpc.mif.vu.lt/hub/|JupyterHub]] you can run calculations with the python command line in a web browser and use the [[https://jupyter.org|JupyterLab]] environment. If you install your own JupyterLab environment in your home directory, you also need to install the ''batchspawner'' package; then your own environment will be started, for example:
  
<code shell>
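# A minimal sketch (an assumption, not necessarily the original wiki example):
# install JupyterLab and batchspawner into your home directory with pip.
$ python3 -m pip install --user jupyterlab batchspawner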
</code>
  
Alternatively, you can use a container that you made yourself via JupyterHub. In that container you need to install the ''batchspawner'' and ''jupyterlab'' packages and create a script ''~/.local/bin/batchspawner-singleuser'' with execution permissions (''chmod +x ~/.local/bin/batchspawner-singleuser''):
<code shell>
#!/bin/sh
exec singularity exec --nv myjupyterlab.sif batchspawner-singleuser "$@"
</code>

====== Registration ======

  * **For VU MIF network users** - HPC can be used without additional registration if the available resources are sufficient (monthly limit - **100 CPU-h and 6 GPU-h**). Once this limit has been reached, you can request more by filling in the [[https://forms.office.com/Pages/ResponsePage.aspx?id=ghrFgo1UykO8-b9LfrHQEidLsh79nRJAvOP_wV9sgmdUM0ZMR1FINFg3TzVaNlhDSEhUN1A3QTlVUC4u|ITOAC service request form]].

  * **For users of the VU computer network** - you must fill in the [[https://forms.office.com/Pages/ResponsePage.aspx?id=ghrFgo1UykO8-b9LfrHQEidLsh79nRJAvOP_wV9sgmdUM0ZMR1FINFg3TzVaNlhDSEhUN1A3QTlVUC4u|ITOAC service request form]] to get access to MIF HPC. After your request is confirmed, you must create your account in the [[https://hpc.mif.vu.lt|Waldur portal]]. Read more details [[waldur|here]].

  * **For other users (non-members of the VU community)** - you must fill in the [[https://forms.office.com/Pages/ResponsePage.aspx?id=ghrFgo1UykO8-b9LfrHQEidLsh79nRJAvOP_wV9sgmdUMDE1QUo3Slo3UVYwTjM4TDMyTEdZT0tSNi4u|ITOAC service request form]] to get access to MIF HPC. After your request is confirmed, you must come to VU MIF, Didlaukio str. 47, Room 302/304 to receive your login credentials. Please arrange the exact time by phone +370 5219 5005. With these credentials you can create an account in the [[https://hpc.mif.vu.lt|Waldur portal]]. Read more details [[waldur|here]].

====== Connection ======

You need to use an SSH application (ssh, putty, winscp, mobaxterm) and Kerberos or SSH key authentication to connect to **HPC**.

If **Kerberos** is used (see the example after this list):

  * Log in to the Linux environment in a VU MIF classroom or public terminal, or log in to **uosis.mif.vu.lt** using **ssh** or **putty**, with your VU MIF username and password.
  * Check that you have a valid Kerberos key (ticket) with the **klist** command. If the key is not available or has expired, use the **kinit** command.
  * Connect to the **hpc** node with the command **ssh hpc** (a password should not be required).
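A typical Kerberos session looks like this (a sketch; command output is omitted):
<code shell>
$ klist          # check for a valid ticket
$ kinit          # obtain a new ticket if needed (asks for your VU MIF password)
$ ssh hpc        # should log you in without a password prompt
</code>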

If **SSH keys** are used (e.g. if you need to copy big files; see the example after this list):
  * If you don't have SSH keys, you can find instructions on how to create them in a Windows environment **[[duk:ssh_key|here]]**.
  * Before you can use this method, you need to log in with Kerberos at least once. Then create a ''~/.ssh'' directory in the HPC file system and put your **ssh public key** (in OpenSSH format) into the ''~/.ssh/authorized_keys'' file.
  * Connect with **ssh**, **sftp**, **scp**, **putty**, **winscp** or any other software supporting the **ssh** protocol to **hpc.mif.vu.lt** with your **ssh private key**, specifying your VU MIF user name. It should not require a login password, but may ask for your ssh private key passphrase.
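For example (a sketch; ''username'', the key file name and the pasted key are placeholders):
<code shell>
$ mkdir -p ~/.ssh && chmod 700 ~/.ssh                                # on hpc, after logging in with Kerberos
$ echo "ssh-ed25519 AAAA... user@laptop" >> ~/.ssh/authorized_keys   # paste your public key here
$ chmod 600 ~/.ssh/authorized_keys
$ ssh -i ~/.ssh/mykey username@hpc.mif.vu.lt                         # later, from your own computer
</code>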

The **first time** you connect, you **will not** be able to run **SLURM jobs** for the first **5 minutes**; after that, your SLURM account is created.

====== Lustre - Shared File System ======

The VU MIF HPC shared file system is available in the directory ''/scratch/lustre''.

The system creates the directory ''/scratch/lustre/home/username'' for each HPC user, where **username** is the HPC username.

The files in this file system are equally accessible on all compute nodes and on the **hpc** node.

Please use these directories only for their intended purpose and clean them up after your calculations.
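For example (''username'' and ''myjob'' are placeholders), keep the data of a calculation in your Lustre directory and remove it when you are done:
<code shell>
$ mkdir /scratch/lustre/home/username/myjob
$ cd /scratch/lustre/home/username/myjob
# ... run your calculations here ...
$ cd ~ && rm -r /scratch/lustre/home/username/myjob   # clean up when finished
</code>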

====== HPC Partitions ======

^Partition ^Time limit ^RAM    ^Notes|
^main             ^7d            ^7000MB  ^CPU cluster|
^gpu              ^48h           ^12000MB ^GPU cluster|
^power            ^48h           ^2000MB  ^IBM Power9 cluster|

If a time limit is not specified, it is **2h** in all partitions. The table shows the maximum time limit.

The **RAM** column gives the amount of RAM allocated to each reserved **CPU** core.
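For example, a SLURM job script can request a partition and a time limit explicitly (a sketch; the script contents and file names are illustrative):
<code shell>
#!/bin/bash
#SBATCH --partition=main       # main, gpu or power
#SBATCH --time=04:00:00        # otherwise the default 2h limit applies
#SBATCH --ntasks=1
python3 my_script.py           # my_script.py is a placeholder
</code>
Submit the script with ''sbatch'' and check its state with ''squeue''.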