Parallel computing is what HPC is really all about: running a computation on many processors at once. By now, you should have worked through all of the previous tutorials. Here, we learn about the different parallel paradigms (shared memory, message passing, OpenMP, MPI, array jobs, etc.) and see how each is requested and configured in Slurm.
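Below is a minimal sketch of how these paradigms typically map onto Slurm batch scripts (three separate scripts shown back to back), assuming a generic Slurm cluster; the resource numbers and program names (my_array_task, my_threaded_program, my_mpi_program) are hypothetical placeholders, not taken from the video:

```bash
#!/bin/bash
# (1) Embarrassingly parallel: an array job runs many independent copies.
#SBATCH --time=00:10:00
#SBATCH --array=1-10                   # indices 1..10, one independent job per index
./my_array_task input_${SLURM_ARRAY_TASK_ID}.dat

#!/bin/bash
# (2) Shared memory (multithreaded, e.g. OpenMP): one task with several CPUs on one node.
#SBATCH --time=00:10:00
#SBATCH --cpus-per-task=4
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_threaded_program

#!/bin/bash
# (3) Message passing (MPI): several tasks, possibly spread over several nodes.
#SBATCH --time=00:10:00
#SBATCH --ntasks=8
srun ./my_mpi_program
```

Each script is submitted with sbatch; Slurm allocates the requested CPUs or tasks, and the program itself must be written to use them.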
[ Link ]
00:00 Parallel programming models
10:00 User perspective, necessary support in programs
14:28 Embarrassingly parallel via array jobs
14:57 Running multithreaded/multiprocess applications
21:49 Message passing and MPI
30:50 Discussion of exercises
-----
This is part of the Aalto Scientific Computing "Getting started with Scientific Computing" and "HPC Kickstart" workshop. The videos are available to everyone, but may be most useful to the people who attended the workshop and want to review later.
Playlist: [ Link ]
Workshop webpage day 1: [ Link ]
Workshop webpage day 2-3: [ Link ]
Aalto Scientific Computing: [ Link ]
CodeRefinery: [ Link ]