====== Software ======
In the **main** and **gpu** partitions there are installed [[https://
You can check the list of OS packages with the command ''
====== Registration ======
  * **For VU MIF network users** - HPC can be used without additional registration if the available resources are enough (monthly limit - **500 CPU-h and 60 GPU-h**). Once this limit has been reached, you can request more by filling in [[https://
  * **For users of the VU computer network** - you must fill in the [[https://
You can read more about SLURM's capabilities at [[https://
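As a quick illustration of submitting a batch task, a minimal SLURM job script might look like the following sketch. The partition name, resource counts, time limit, and the command itself are example assumptions, not site defaults:

```shell
#!/bin/bash
# Sketch of a minimal SLURM batch script; all values are example assumptions.
#SBATCH -p main           # partition to run in
#SBATCH -n 1              # number of tasks (CPUs) to allocate
#SBATCH --time=01:00:00   # wall-time limit for the job
hostname                  # the job's work: print the compute node's name
```

Submit it with ''sbatch job.sh''; ''squeue'' shows its state in the queue.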
====== Interactive Tasks (SLURM) ======

Interactive tasks can be run with the //srun// command:

<code>
$ srun --pty $SHELL
</code>

The above command connects you to the compute node environment assigned by SLURM and lets you run and debug programs on it directly.

When your commands are done, disconnect from the compute node with the command

<code>
$ exit
</code>

If you want to run graphical programs, you need to connect with **ssh -X** to **uosis.mif.vu.lt** and then to **hpc**:

<code>
$ ssh -X uosis.mif.vu.lt
$ ssh -X hpc
$ srun --pty $SHELL
</code>

In the **power** cluster interactive tasks can be started with

<code>
$ srun -p power --mpi=none --pty $SHELL
</code>

====== GPU Tasks (SLURM) ======

To use a GPU you need to additionally specify the ''--gres gpu'' option.

With ''

Example of an interactive task with 1 GPU:
<code>
$ srun -p gpu --gres gpu --pty $SHELL
</code>
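The same GPU request can also be made in a batch job. A minimal job script sketch (the time limit and the script body are assumed examples, not site defaults):

```shell
#!/bin/bash
# Sketch of a SLURM batch script requesting one GPU; values are examples.
#SBATCH -p gpu            # the gpu partition, as in the interactive example
#SBATCH --gres gpu        # request one GPU
#SBATCH --time=00:30:00   # example wall-time limit
nvidia-smi                # the job's work: show the GPU assigned to it
```

Submit it with ''sbatch'' as with any other batch task.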
+ | |||
+ | ====== Introduction to OpenMPI ====== | ||
+ | |||
+ | Ubuntu 18.04 LTS is the packet of **2.1.1** OpenMPI version. | ||
+ | To use the newer version **4.0.1** you need to use | ||
+ | < | ||
+ | module load openmpi/4.0 | ||
+ | </ | ||
+ | before running MPI commands. | ||
+ | |||
===== Compiling MPI Programs =====

An example of a simple MPI program is in the directory ''/

<code>
$ mpicc -o foo foo.c
$ mpif77 -o foo foo.f
$ mpif90 -o foo foo.f
</code>
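As an illustration of the kind of source file these compiler wrappers accept, here is a minimal MPI program. This is a generic sketch, not the sample program bundled on the cluster:

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI program: every process reports its rank and the world size. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut down MPI cleanly */
    return 0;
}
```

Saved as ''foo.c'', it compiles with the ''mpicc'' line above and runs with **mpirun** as described in the next section.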
===== Running MPI Programs =====

MPI programs are started with **mpirun** or **mpiexec**. You can learn more about them with the **man mpirun** or **man mpiexec** command.

A simple (SPMD) program can be started with the following mpirun command line:

<code>
$ mpirun foo
</code>

By default, all allocated processors are used. If you want to use fewer, you can pass the **-np** parameter to **mpirun**. It is not recommended to use fewer CPUs than reserved for a longer period of time, as the unused CPUs remain idle.

**ATTENTION** It is strictly forbidden to use more CPUs than you have reserved, as this may affect the performance of other tasks.

Find more information at [[https://

====== Task Efficiency ======

  * Please use at least 50% of the ordered CPU quantity.
  * Using more CPUs than ordered will not improve performance,
  * If you use the ''

====== Resource Limits ======

If your tasks don't start because of **AssocGrpCPUMinutesLimit** or **AssocGrpGRESMinutes**,

//The first way to see how much of the resources has been used://

<code>
sreport -T cpu,
</code>

Where **USERNAME** is your MIF user name. **Start** and **End** show the start and end days of the current month. You can also specify them with ''

**NOTE** Resource usage is given in minutes; divide the number by 60 to get hours.
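Since the reported numbers are in minutes, the conversion is a single division; for example (the usage figure here is made up, not real accounting output):

```shell
# Hypothetical CPU usage as reported by the accounting tools, in minutes:
used_minutes=9000
# Integer division by 60 converts whole minutes to hours:
echo "$((used_minutes / 60)) h"   # prints: 150 h
```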
+ | |||
+ | //The second way to see how much resources are used:// | ||
+ | |||
+ | < | ||
+ | sshare -l -A USERNAME_mif -p -o GrpTRESRaw, | ||
+ | </ | ||
+ | |||
+ | Where **USERNAME** is your MIF user name. Or specify the account whose usage you want to see in **-A**. The data is also displayed in minutes: | ||
+ | * **GrpTRESRaw** - how much is used. | ||
+ | * **GrpTRESMins** - what is the limit. | ||
+ | * **GGRTRESRunMins** - the remaining resources for tasks that are still running. | ||
+ | |||
+ | ====== The Links ====== | ||
+ | |||
+ | * [[waldur|HPC Waldur portal description]] | ||
+ | * [[https:// | ||
+ | * [[https:// | ||
+ | * [[https:// | ||
+ | * [[http:// | ||
+ | * [[pagalba@mif.vu.lt]] - registration of the **HPC** problems. | ||
en/hpc.txt · Last modified: 2024/02/21 12:50 by rolnas