Commit
Update 10-further-mpi-topics.md
csccva authored May 24, 2024
1 parent 8bb27a3 commit ab21fc7
Showing 1 changed file with 0 additions and 130 deletions: mpi/docs/10-further-mpi-topics.md
MPI_Request_free(&recv_req); MPI_Request_free(&send_req);
![](img/one-sided-epoch.png)
</div>
# Key MPI functions for one-sided communication {.section}
# Creating a window {.split-definition}
`MPI_Win_create(base, size, disp_unit, info, comm, win)`
: `base`{.input}
: (pointer to) local memory to expose for RMA
`size`{.input}
: size of the window in bytes
`disp_unit`{.input}
: local unit size for displacements in bytes
`info`{.input}
: hints for implementation
`comm`{.input}
: communicator
`win`{.output}
: handle to window
- The window object is deallocated with `MPI_Win_free(win)`
# Starting and ending an epoch
`MPI_Win_fence(assert, win)`
: `assert`{.input}
: optimize for specific usage. Valid values are `0`, `MPI_MODE_NOSTORE`,
`MPI_MODE_NOPUT`, `MPI_MODE_NOPRECEDE`, `MPI_MODE_NOSUCCEED`
`win`{.input}
: window handle
- Used both for starting and ending an epoch
- Should both precede and follow data movement calls
- Collective, barrier-like operation
# Data movement: Put {.split-definition}
`MPI_Put(origin, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)`
: `origin`{.input}
: (pointer to) local data to be sent to target
`origin_count`{.input}
: number of elements to put
`origin_datatype`{.input}
: MPI datatype for local data
`target_rank`{.input}
: rank of the target task
`target_disp`{.input}
: starting point in target window
`target_count`{.input}
: number of elements in target
`target_datatype`{.input}
: MPI datatype for remote data
`win`{.input}
: RMA window
# Data movement: Get {.split-definition}
`MPI_Get(origin, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)`
: `origin`{.input}
: (pointer to) local buffer in which to receive the data
`origin_count`{.input}
: number of elements to get
`origin_datatype`{.input}
: MPI datatype for local data
`target_rank`{.input}
: rank of the target task
`target_disp`{.input}
: starting point in target window
`target_count`{.input}
: number of elements from target
`target_datatype`{.input}
: MPI datatype for remote data
`win`{.input}
: RMA window
# Data movement: Accumulate {.split-def-3}
`MPI_Accumulate(origin, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)`
: `origin`{.input}
: (pointer to) local data to be accumulated
`origin_count`{.input}
: number of elements to accumulate
`origin_datatype`{.input}
: MPI datatype for local data
`target_rank`{.input}
: rank of the target task
`target_disp`{.input}
: starting point in target window
`target_count`{.input}
: number of elements for target
`target_datatype`{.input}
: MPI datatype for remote data
`op`{.input}
: accumulation operation (as in `MPI_Reduce`)
`win`{.input}
: RMA window
# Simple example: Put
```c
/* Illustrative completion -- the code in this diff view is truncated.
   Rank 0 writes its value into rank 1's window; run with at least
   two processes. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, data;
    MPI_Win window;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    data = rank;

    MPI_Win_create(&data, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &window);

    MPI_Win_fence(0, window);
    if (rank == 0)
        /* put local "data" into rank 1's window at displacement 0 */
        MPI_Put(&data, 1, MPI_INT, 1, 0, 1, MPI_INT, window);
    MPI_Win_fence(0, window);

    MPI_Win_free(&window);
    MPI_Finalize();
    return 0;
}
```