====== Description of the Equipment ======
A High Performance Computing (HPC) supercomputer is available at VU MIF.

VU MIF HPC consists of a supercomputer built from the following clusters (the numbers show the actual and currently available amounts):

^Title ^Nodes ^CPU ^GPU ^RAM ^HDD ^Network ^
^main |35/... | | | | | |
^gpu | | | | | | |
^power | | | | | | |

In total: **40/...** nodes.

In the text below, processor = CPU = core: a single core of the processor (with all of its hyperthreads, if hyperthreading is enabled).
====== Software ======
The **main** and **gpu** partitions have [[https://...]] installed.

You can check the list of installed OS packages with the command ''...''.
With the command [[https://...]] you can ...
You can prepare your own container with singularity, for example a Python container with an extra package:

<code shell>
# Build a writable sandbox directory from a Docker image
# (docker://python is an assumption; the image name was truncated)
$ singularity build --sandbox /tmp/python docker://python
# Install the required package inside the writable sandbox
$ singularity exec -w /tmp/python pip install package
# Convert the sandbox into a read-only SIF image and remove the sandbox
$ singularity build python.sif /tmp/python
$ rm -rf /tmp/python
</code>
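Once built, the image can be used directly; a minimal usage sketch (the script name is hypothetical):

<code shell>
# Run a command inside the finished image (myscript.py is an example file)
$ singularity exec python.sif python myscript.py
</code>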
Similarly, you can use R, Julia or other containers that do not require root privileges to install packages.
If you want to add OS packages to the singularity container, you need root privileges; the commands below emulate them with **fakeroot** (replace the ''...'' placeholders with the appropriate image, paths and commands):

<code shell>
$ singularity build --sandbox /tmp/python docker://...
$ cp /... /tmp/python/...
$ fakeroot -l /...
$ fakeroot -l /...
$ fakeroot -l /...
$ rm -rf /tmp/python/...
# Pack the sandbox into a SIF image and remove the sandbox
$ singularity build python.sif /tmp/python
$ rm -rf /tmp/python
</code>
+ | |||
+ | There are ready-made scripts to run your **hadoop** tasks using the [[https:// | ||
+ | |||
+ | With [[https:// | ||
+ | |||
<code shell>
# Upgrade the packaging toolchain for Python 3.7
$ python3.7 -m pip install --upgrade pip setuptools wheel
# Install batchspawner and JupyterLab, ignoring any system-installed versions
$ python3.7 -m pip install --ignore-installed batchspawner jupyterlab
</code>
Alternatively, you can run JupyterLab from a singularity container by using a small wrapper script as the single-user server:

<code shell>
#!/bin/sh
# Start the single-user server from inside the container, forwarding all arguments;
# --nv enables NVIDIA GPU support inside the container
exec singularity exec --nv myjupyterlab.sif batchspawner-singleuser "$@"
</code>
====== Registration ======

  * **For VU MIF network users** - HPC can be used without additional registration if the available resources are enough (monthly limit: **100 CPU-h and 6 GPU-h**). Once this limit has been reached, you can request more by filling in [[https://...]].

  * **For users of the VU computer network** - you must fill in the [[https://...]] form.

  * **For other users (non-members of the VU community)** - you must fill in the [[https://...]] form.
====== Connection ======

You need to use SSH applications (ssh, putty, winscp, mobaxterm) and Kerberos or SSH key authentication to connect to **HPC**.

If **Kerberos** is used (see the command sketch after this list):

  * Log in to the Linux environment in a VU MIF classroom or public terminal with your VU MIF username and password, or log in to **uosis.mif.vu.lt** with your VU MIF username and password using **ssh** or **putty**.
  * Check that you have a valid Kerberos key (ticket) with the **klist** command. If the key is missing or has expired, use the **kinit** command.
  * Connect to the **hpc** node with the command **ssh hpc** (no password should be required).
If **SSH keys** are used (e.g. if you need to copy big files; see the sketch after this list):

  * If you don't have SSH keys, you can find instructions on how to create them in a Windows environment **[[duk:...]]**.
The **first time** you connect, you will **not** be able to run **SLURM jobs** for the first **5 minutes**; after that, your SLURM account will have been created.
====== Lustre - Shared File System ======

The VU MIF HPC shared file system is available in the directory ''/...''.

The system creates the directory ''/...''.

The files in this file system are equally accessible on all compute nodes and on the **hpc** node.

Please use these directories only for their intended purpose and clean them up after your calculations.
====== PST Partitions ======

^Partition ^Time limit ^RAM ^
^main | | |
^gpu |48h | |
^power | | |

If a task does not specify a time limit, it gets **2h** by default in all partitions. The table shows the maximum time limits.

The **RAM** column gives the amount of RAM allocated to each reserved **CPU** core.