Notes 2016 12 06
Attendees
- Marc-Andre
- Jean Baptiste Besnard
- Nathan Hjelm
- Martin Schulz
- Kathryn Mohror
- David Solt
- Lena Oden
- Esthela Gallardo
- Anh Vo
- Jim Dinan
- Soren Rasmussen
Agenda
- Tools might want to copy the entire buffer, with padding, in the event callback (see the sketch after this list)
- How will tools attach/detach additional event buffers (user-space buffers)?
- Timestamps: what structure should MPI_T time have?
- Async signal safety
- The Rice group will join us in January
- Sessions interactions with tools
- Also, can we use PMIx as a broker between tool layers and define variables that MPI implementations must support to be compliant (for some definition of compliant)?
- Or do we want to define some general MPI_T variables?
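A minimal sketch of the first agenda item, assuming a hypothetical callback interface (every name here is a placeholder, not standard MPI_T API): the tool copies the whole event buffer, padding included, inside the callback and defers the analysis to a safer context.

```c
/* Hypothetical sketch only: none of these types or functions are standard
 * API; they stand in for whatever interface the working group settles on. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    const void *data;  /* implementation-owned event buffer          */
    size_t      size;  /* total size in bytes, including any padding */
} event_instance_t;    /* hypothetical handle passed to the callback */

static void tool_event_cb(const event_instance_t *ev, void *user_data)
{
    (void)user_data;
    void *copy = malloc(ev->size);
    if (copy == NULL)
        return;                       /* event is lost if allocation fails */
    memcpy(copy, ev->data, ev->size); /* whole buffer, padding included    */
    /* ... enqueue 'copy' for processing outside the callback ... */
    free(copy);                       /* placeholder; a real tool would defer */
}
```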
Event_register
- Do we want to enforce an ordering on event notification based on the order in which tools register for an event? (see the sketch after this list)
- If multiple tools register in some order, should events be raised in that order?
- Seems hard to enforce, because the tools would have to enforce an ordering on registration
- Our ideas for enforcing ordering between tools seem to be wandering into PMPI2 territory
- We should shelve this and talk about it later
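To make the ordering question concrete, here is a minimal sketch of a hypothetical event_register and dispatch loop; all names are assumptions. For the question above to have teeth, the runtime would have to promise an invocation order; iterating in registration order below is an implementation accident, not a guarantee.

```c
/* Hypothetical sketch, not standard API: several tools register callbacks
 * for the same event. Open question: must they fire in registration order? */
typedef void (*event_cb_t)(void *user_data);

#define MAX_CBS 8
static event_cb_t cbs[MAX_CBS];
static void      *cb_args[MAX_CBS];
static int        num_cbs;

static int event_register(event_cb_t cb, void *user_data)
{
    if (num_cbs >= MAX_CBS)
        return -1;                 /* no more registration slots */
    cbs[num_cbs]     = cb;
    cb_args[num_cbs] = user_data;
    num_cbs++;
    return 0;
}

/* Invoked by the MPI library when the event occurs. Calling in
 * registration order here is an implementation choice; nothing in the
 * discussion above guarantees tools this ordering. */
static void raise_event(void)
{
    for (int i = 0; i < num_cbs; i++)
        cbs[i](cb_args[i]);
}
```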
Event_read
- How to get all event data in one read?
- Is there a way to get it without doing a series of reads?
- Or maybe read based on index into the event structure?
- An index looks like the way to go for now; skip over reading the things we don't care about (see the sketch after this list)
- We should also probably drop the datatype parameter so the call is symmetric with MPI_T reads, where the datatype is implied
- In event_get_info
- We don't need the element count, because we can get the number of elements from enum_get_info
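A minimal sketch of the index-based read discussed above, with all names assumed: the caller picks an element by its index in the event structure, and the element's size/type is implied by the event's metadata, so there is no datatype parameter.

```c
/* Hypothetical sketch, not standard API: read one element of an event by
 * index; the datatype (and thus the size) is implied by the event's
 * metadata, mirroring the "symmetric with MPI_T read" suggestion. */
#include <string.h>

typedef struct {
    size_t offset;  /* byte offset of the element in the event buffer */
    size_t size;    /* element size implied by the event's metadata   */
} element_desc_t;

typedef struct {
    const void           *data;         /* raw event buffer        */
    const element_desc_t *elements;     /* per-element descriptors */
    int                   num_elements;
} event_t;

static int event_read(const event_t *ev, int index, void *buf)
{
    if (index < 0 || index >= ev->num_elements)
        return -1;                       /* bad element index */
    const element_desc_t *d = &ev->elements[index];
    /* Copy just this element; elements we don't care about are never read. */
    memcpy(buf, (const char *)ev->data + d->offset, d->size);
    return 0;
}
```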
Event ordering
- We want events to be delivered in order
- But what if we don't get the events in order from the hardware/network software?
- Multithreaded network software could process events in some order, but they might not complete in a total ordering, and time wouldn't necessarily be synchronized across cores
- We need a normalizing, best-effort function in the MPI library that tries to resolve time issues
- This could be done in the event_get_time routine (see the sketch after this list)
- This is going to be hard for implementations to support. The thought is that even though the events are supposed to be defined by implementations, the community will probably define what it wants from implementations and then "force" implementation support
- Timers should be easy to figure out
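A minimal sketch of the best-effort normalization idea, folded into a hypothetical event_get_time as suggested above; the uint64_t tick representation and every name here are assumptions, not decisions.

```c
/* Hypothetical sketch, not standard API: clamp raw timestamps so a tool
 * never sees time move backwards for a given source, as a best-effort fix
 * for unsynchronized clocks across cores. */
#include <stdint.h>

typedef struct {
    uint64_t raw_time;  /* raw hardware/network timestamp */
} timed_event_t;

static uint64_t last_time;  /* high-water mark for one event source */

static uint64_t event_get_time(const timed_event_t *ev)
{
    uint64_t t = ev->raw_time;
    if (t < last_time)
        t = last_time;      /* normalize an out-of-order raw timestamp */
    else
        last_time = t;
    return t;
}
```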
What about dropped events?
- Is the event type for dropped events defined in the spec, or is it implementation defined? (see the sketch below)
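For illustration, a minimal sketch of one way dropped events could be surfaced, via a tool-supplied handler that receives a count; every name here is hypothetical, and whether such a type belongs in the spec is exactly the open question.

```c
/* Hypothetical sketch, not standard API: the library counts events it had
 * to drop (e.g., on internal buffer overflow) and reports them through a
 * tool-supplied handler. */
#include <stdint.h>

typedef void (*dropped_cb_t)(uint64_t num_dropped, void *user_data);

static dropped_cb_t dropped_cb;
static void        *dropped_arg;

static void set_dropped_handler(dropped_cb_t cb, void *user_data)
{
    dropped_cb  = cb;
    dropped_arg = user_data;
}

/* Called by the library when it has dropped n events. */
static void report_dropped(uint64_t n)
{
    if (dropped_cb != NULL)
        dropped_cb(n, dropped_arg);
}
```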
What did we decide?
- We'll take what we can get
- Implementation is best effort
- Events of a given ID will be delivered in order (as much as possible by the MPI library)
- Event ordering makes the most sense in, and can only be done by, the MPI library
- What if we leave the question of ordering until after we get a couple of years' experience with it?