From 272f4128056503f0224aa7d7d7ce72fcf03ed4cc Mon Sep 17 00:00:00 2001
From: Tuomas Rossi
Date: Thu, 20 Jun 2024 16:51:21 +0300
Subject: [PATCH] Update slides

---
 gpu-hip/docs/05-multi-gpu.md | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/gpu-hip/docs/05-multi-gpu.md b/gpu-hip/docs/05-multi-gpu.md
index 1665a9de1..0a45e4d45 100644
--- a/gpu-hip/docs/05-multi-gpu.md
+++ b/gpu-hip/docs/05-multi-gpu.md
@@ -288,20 +288,22 @@ int omp_target_memcpy(void *dst, const void *src, size_t size, size_t dstOffset,

 * If direct peer to peer access is not available or implemented, the functions
   should fall back to a normal copy through host memory

-# Three levels of parallelism
+# Summary {.section}

-
+# Three levels of parallelism

-1. GPU -- GPU threads on the multiprocessors
+<div class="column">
+1. GPU: GPU threads
     * Parallelization strategy: HIP, OpenMP, SYCL, Kokkos, OpenCL
-2. Node -- Multiple GPUs and CPUs
+2. Node: Multiple GPUs and CPUs
     * Parallelization strategy: MPI, Threads, OpenMP
-3. Supercomputer -- Many nodes connected with interconnect
+3. Supercomputer: Many nodes connected with interconnect
     * Parallelization strategy: MPI between nodes
+</div>

-
-
-![](img/parallel_regions.png){width=60%}
+<div class="column">
+![](img/parallel_regions.png){width=99%}
+</div>

 # Summary
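
A note on the hunk context: the slide touched by this change documents omp_target_memcpy and the fallback to a copy through host memory when direct peer-to-peer access is unavailable. The sketch below shows a minimal way that routine is driven from user code, assuming two devices numbered 0 and 1 and an arbitrary buffer size (neither is taken from the slides); the peer-to-peer fallback happens inside the runtime, not in the caller.

#include <omp.h>
#include <stdio.h>

int main(void) {
    const size_t n = 1 << 20;                  /* element count, arbitrary */
    const size_t bytes = n * sizeof(double);

    if (omp_get_num_devices() < 2) {
        fprintf(stderr, "need at least two devices for this sketch\n");
        return 1;
    }
    const int src_dev = 0, dst_dev = 1;        /* assumed device numbers */

    /* allocate a buffer on each device */
    double *src = (double *) omp_target_alloc(bytes, src_dev);
    double *dst = (double *) omp_target_alloc(bytes, dst_dev);

    /* device-to-device copy: uses peer-to-peer access if available,
       otherwise the runtime stages the copy through host memory */
    int err = omp_target_memcpy(dst, src, bytes,
                                0, 0,              /* dstOffset, srcOffset */
                                dst_dev, src_dev);
    if (err != 0)
        fprintf(stderr, "omp_target_memcpy failed: %d\n", err);

    omp_target_free(src, src_dev);
    omp_target_free(dst, dst_dev);
    return 0;
}

Error handling is kept minimal here; omp_target_memcpy returns zero on success.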
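
The updated "Three levels of parallelism" slide lists HIP/OpenMP/SYCL/Kokkos/OpenCL within a GPU, MPI and threads across the GPUs and CPUs of a node, and MPI between nodes. One minimal way to wire the node and supercomputer levels together, sketched here under the assumption of one MPI rank per GPU, is to pick the device from the node-local rank; the MPI_COMM_TYPE_SHARED split and the modulo mapping are illustrative choices, not something the slide prescribes.

#include <stdio.h>
#include <mpi.h>
#include <hip/hip_runtime.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);          /* level 3: MPI between nodes */

    /* level 2: ranks sharing a node pick different GPUs */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    int ngpus = 0;
    hipGetDeviceCount(&ngpus);
    int device = (ngpus > 0) ? node_rank % ngpus : 0;
    hipSetDevice(device);                          /* assumed one rank per GPU */

    printf("rank %d (node-local %d) uses GPU %d of %d\n",
           rank, node_rank, device, ngpus);

    /* level 1: GPU threads are created by launching HIP kernels
       on the selected device */

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

On systems where the batch scheduler already restricts each rank to a single visible GPU, the modulo selection simply collapses to device 0.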