2016 05 16 webex
- Discussion points for today:
- Potential liberating thought: what if we don't need dynamic groups?
- Has the FT WG discussed the universal naming mechanism idea? (slide 31)
- Paired with the idea of: names are private until I publish them
- Do we have a use case for getting the names of groups of which you are not a member?
- Still unclear: do we want to create win/file from group? (slides 43/44)
- Session init:
- Bool param + info key for specifying concurrent access or not, and granularity
- Returns MPI_ERR_UNSUPPORTED_OPERATION if the thread level cannot be provided (obviates need for query function)
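
As a reference for the discussion below, here is a minimal C sketch of one possible reading of that proposal. Everything in it is hypothetical: the MPIX_ names, the session handle type, and the "thread_support" info key are placeholders for illustration, not agreed or standardized API.

```c
#include <mpi.h>
#include <stdbool.h>

typedef struct MPIX_Session_s *MPIX_Session;   /* placeholder session handle */

/* Hypothetical prototype for the proposal: a bool says whether threads may
 * call MPI concurrently; an info key refines the non-concurrent case.      */
int MPIX_Session_init(bool concurrent_access, MPI_Info info,
                      MPIX_Session *session);

void example(void)
{
    MPI_Info info;
    MPI_Info_create(&info);
    /* Hypothetical info key: only meaningful when concurrent_access is false. */
    MPI_Info_set(info, "thread_support", "funneled");

    MPIX_Session session;
    int rc = MPIX_Session_init(false, info, &session);
    if (rc == MPI_ERR_UNSUPPORTED_OPERATION) {
        /* The requested thread level cannot be provided; no separate
         * query function is needed to discover this.                  */
    }
    MPI_Info_free(&info);
}
```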
Much discussion about two points:
- New proposal for MPI_SESSION_INIT and thread levels:
- Have "bool concurrent_access" as a param to MPI_SESSION_INIT: true == MPI_THREAD_MULTIPLE; false == MPI_THREAD_SINGLE, FUNNELED, or SERIALIZED.
- App can use an info key to select between SINGLE/FUNNELED/SERIALIZED.
- But what is the default?
- If we make SERIALIZED the default, is that imposing a performance penalty? If we make SINGLE the default, is that too restrictive (and unrealistic, since you can't guarantee that threads won't be created in the future)?
- Or do we keep it like MPI-3.1, and let the implementation return whatever thread level it wants to?
- ...at which point (since the app will have to query to see what thread level it got), what is the "win" of having bool concurrent_access + an info key to specify which non-concurrent model the app wants? It seems like we're back to having a single int with 4 enum-like values -- just like MPI-3.1.
- At this point: added required/provided to MPI_SESSION_INIT, just like MPI-3.1 (see the sketch after this list).
- Perhaps we can rename (i.e., add aliases) to help user education:
- MPI_THREAD_SINGLE -> MPI_NONCONCURRENT_SINGLE
- MPI_THREAD_FUNNELED -> MPI_NONCONCURRENT_FUNNELED
- MPI_THREAD_SERIALIZED -> MPI_NONCONCURRENT_SERIALIZED
- MPI_THREAD_MULTIPLE -> MPI_CONCURRENT
- No one has any better ideas at this point.
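
For reference, a hedged sketch of where today's discussion landed: a required/provided pair just like MPI_INIT_THREAD, with the proposed "concurrent/non-concurrent" names expressed as aliases for the existing constants. The MPIX_ prefix, the handle type, and the parameter list are placeholders, not agreed API.

```c
#include <mpi.h>

/* Proposed aliases for the existing thread-level constants (illustrative only). */
#define MPI_NONCONCURRENT_SINGLE     MPI_THREAD_SINGLE
#define MPI_NONCONCURRENT_FUNNELED   MPI_THREAD_FUNNELED
#define MPI_NONCONCURRENT_SERIALIZED MPI_THREAD_SERIALIZED
#define MPI_CONCURRENT               MPI_THREAD_MULTIPLE

typedef struct MPIX_Session_s *MPIX_Session;   /* placeholder session handle */

/* Hypothetical prototype: required/provided, just like MPI_Init_thread. */
int MPIX_Session_init(int required, int *provided, MPIX_Session *session);

void example(void)
{
    MPIX_Session session;
    int provided;

    MPIX_Session_init(MPI_CONCURRENT, &provided, &session);
    if (provided != MPI_CONCURRENT) {
        /* Implementation granted a lower (non-concurrent) level;
         * the app must funnel or serialize its MPI calls accordingly. */
    }
}
```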
- Is there a use case for dynamic sets? I.e., where the membership of a set can change over time?
- No one can think of a use case for sets (or MPI_Groups or any other MPI object) changing membership over time.
- Instead, everyone seems to be OK with the following (a sketch follows this list):
- Create a new set / MPI_Group / etc.
- Delete the old set / MPI_Group / etc.
- It is up to the implementations to make these operations not suck, performance-wise
- The use cases we want to support are:
- Applications can shrink and grow
- Applications can already grow with MPI_COMM_SPAWN / MPI_EXEC
- There are some implications for shrinking, though -- see new slides for 30 May 2016
- Fault-tolerant scenarios (see FT WG)
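
A short illustration, using standard MPI-3.1 group calls, of the "create a new one, delete the old one" pattern agreed above; the ranks being excluded are made up for the example.

```c
#include <mpi.h>

/* "Shrink" a group by deriving a new group and freeing the old one,
 * rather than mutating its membership in place.                      */
void shrink_group(MPI_Group *group)
{
    int excl_ranks[] = { 0, 3 };   /* arbitrary ranks to drop, for illustration */
    MPI_Group newgroup;

    MPI_Group_excl(*group, 2, excl_ranks, &newgroup);  /* create the new group */
    MPI_Group_free(group);                             /* delete the old group */
    *group = newgroup;
}
```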