====== Description of the Equipment ======
VU MIF operates a High Performance Computing (HPC) supercomputer consisting of the following clusters (the first number is the actual and available amount):
^Title ^Nodes ^CPU ^GPU ^RAM ^HDD ^Network ^Notes^
====== Software ======

**Singularity** is installed in the **main** and **gpu** partitions.
You can check the list of installed OS packages with the system package manager.
With the **singularity** command you can pull ready-made container images from public registries.
You can prepare your own container, for example:
<code shell>
$ singularity build --sandbox /tmp/python docker://
</code>
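Continuing the example, here is a sketch of how such a sandbox is typically used (assuming Singularity is available on your PATH; the ''mypython.sif'' name is only illustrative):

```shell
# Enter the sandbox and modify it interactively (--writable allows changes)
$ singularity shell --writable /tmp/python
# Afterwards, build a single image file from the sandbox for use in jobs
$ singularity build mypython.sif /tmp/python
```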
There are ready-made scripts to run your **hadoop** tasks on the cluster.
With **JupyterHub** you can work on the cluster interactively.
Alternatively, you can use your own container via JupyterHub, for example with a wrapper script:
<code shell>
#!/bin/sh
exec singularity exec --nv myjupyterlab.sif batchspawner-singleuser "
</code>
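The wrapper above expects an image file ''myjupyterlab.sif''. As a hedged sketch, such an image could be pulled from a public registry; the image name below is only an example, and the image must provide the ''batchspawner-singleuser'' command inside the container for the wrapper to work:

```shell
# Example only: pull a public Jupyter image and name it as the wrapper expects
$ singularity pull myjupyterlab.sif docker://jupyter/base-notebook
```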
+ | |||
+ | ====== Registration ====== | ||
+ | |||
+ | * **For VU MIF network users** - HPC can be used without additional registration if the available resources are enough (monthly limit - **100 CPU-h and 6 GPU-h**). Once this limit has been reached, you can request more by filling in [[https:// | ||
+ | |||
+ | * **For users of the VU computer network** - you must fill in the [[https:// | ||
+ | |||
+ | * **For other users (non-members of the VU community)** - you must fill in the [[https:// | ||
+ | |||

====== Connection ======

To connect to **HPC** you need an SSH client (ssh, PuTTY, WinSCP, MobaXterm) and Kerberos or SSH key authentication.

If **Kerberos** is used:

  * Log in to the Linux environment in a VU MIF classroom or public terminal with your VU MIF username and password, or log in to **uosis.mif.vu.lt** with your VU MIF username and password using **ssh** or **putty**.
  * Check whether you have a valid Kerberos ticket with the **klist** command. If the ticket is missing or has expired, obtain a new one with the **kinit** command.
  * Connect to the **hpc** node with the command **ssh hpc** (no password should be required).
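The Kerberos steps above can be sketched as a short terminal session:

```shell
$ klist    # check for a valid Kerberos ticket
$ kinit    # obtain a new ticket if none is valid (asks for your VU MIF password)
$ ssh hpc  # connect to the hpc node; no password should be required
```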
+ | |||
+ | If **SSH keys** are used (e.g. if you need to copy big files): | ||
+ | * If you don't have SSH keys, you can find instructions on how to create them in a Windows environment **[[duk: | ||
+ | * | ||
+ | * | ||
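On Linux, the key-based path can be sketched as follows (''username'' is a placeholder; **uosis.mif.vu.lt** is the login host from the Kerberos steps above):

```shell
# Generate an SSH key pair if you do not have one yet
$ ssh-keygen -t ed25519
# Install the public key on the login host
$ ssh-copy-id username@uosis.mif.vu.lt
# Copy big files with scp (or use WinSCP on Windows)
$ scp bigdata.tar username@uosis.mif.vu.lt:
```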
+ | |||
+ | The **first time** you connect, you **will not** be able to run **SLURM jobs** for the first **5 minutes**. After that, SLURM account will be created. | ||
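Once the SLURM account exists, a minimal batch job can be submitted. This is only a sketch; the partition name and the **2h** default time limit are taken from the queues section below:

```shell
#!/bin/sh
#SBATCH -p main        # queue (partition) to use
#SBATCH -t 01:00:00    # wall-time limit (default is 2h if omitted)
#SBATCH -c 1           # number of CPU cores
hostname               # replace with your actual program
```

Save this as ''job.sh'', submit it with ''sbatch job.sh'', and check its status with ''squeue''.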
+ | |||
+ | ====== Lustre - Shared File System ====== | ||
+ | |||
+ | VU MIF HPC shared file system is available in the directory ''/ | ||
+ | |||
+ | The system creates directory ''/ | ||
+ | |||
+ | The files in this file system are equally accessible on all compute nodes and on the **hpc** node. | ||
+ | |||
+ | Please use these directories only for their purpose and clean them up after calculations. | ||
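A hedged example of the intended workflow; ''WORKDIR'' below is a placeholder for your directory in the shared file system (the real path is given above):

```shell
# WORKDIR stands in for your directory in the shared file system
WORKDIR=$(mktemp -d)/myjob         # illustrative stand-in; use your Lustre directory on HPC
mkdir -p "$WORKDIR"
echo data > "$WORKDIR"/input.dat   # stage input files
# ... run your calculations against $WORKDIR on any node ...
rm -r "$WORKDIR"                   # clean up after the calculation
```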
+ | |||
+ | ====== PST eilės (partition) ====== | ||
+ | |||
+ | ^Eilė (partition) ^Laiko limitas ^RAM | ||
+ | ^main | ||
+ | ^gpu ^48h | ||
+ | ^power | ||
+ | |||
+ | Visose eilėse užduotims laiko limitas yra **2h**, jei jis nebuvo nurodytas, o lentelėje yra pateiktas maksimalus leidžiamas laiko limitas. | ||
+ | |||
+ | **RAM** stulpelyje yra pateikiamas kiekvienam rezervuotam **CPU** branduoliui skiriamas RAM kiekis. | ||
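For example, to work in the **gpu** queue with an explicit time limit instead of the **2h** default (the ''--gres'' flag is the common SLURM way to request a GPU; the exact resource name on this cluster may differ):

```shell
# Interactive shell in the gpu queue with the maximum 48h limit and one GPU
$ srun -p gpu -t 48:00:00 --gres=gpu:1 --pty bash
```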
en/hpc.txt · Last modified: 2024/02/21 12:50 by rolnas