# Using processor-cache affinity in shared-memory multiprocessor scheduling
- Take advantage of processor-cache affinity when scheduling tasks
- Investigate how affinity affects scheduling policies
- How fast the task runs given heterogeneous processors (uncommon)
- How the processor cache affects processor performance
- A processor may have resources (e.g. I/O devices) associated with it
- The burst of cache misses that happens on the first run of a task
- The set of cache blocks a task uses (its cache footprint)
- Measurements in this paper do not take false data sharing into account
- First-come, first-served (FCFS)
- Fixed processor (FP)
- Last processor (LP)
  - A task runs next on the processor that last ran it
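The last-processor policy can be sketched in a few lines. This is a minimal Python illustration, not code from the paper; `pick_task`, `ready_queue`, and `last_cpu` are hypothetical names.

```python
from collections import deque

def pick_task(ready_queue, idle_cpu, last_cpu):
    """Last-processor policy sketch: when idle_cpu asks for work,
    prefer the first queued task that last ran on that processor
    (its cache may still hold the task's footprint); otherwise
    fall back to plain FCFS (head of the queue).

    ready_queue: deque of task ids
    last_cpu:    dict mapping task id -> processor it last ran on
    """
    for task in ready_queue:
        if last_cpu.get(task) == idle_cpu:
            ready_queue.remove(task)
            return task
    return ready_queue.popleft()  # no affine task queued: FCFS fallback
```

For example, with `ready_queue = deque(["A", "B"])` and `last_cpu = {"B": 1}`, an idle processor 1 picks `"B"` ahead of `"A"` even though `"A"` arrived first.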
- Minimum intervening (MI)
  - Before scheduling a task, compute for each processor the number of tasks that have run there since this task last ran on it (the intervening tasks)
  - The task is scheduled on the processor with the minimum intervening count
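The intervening-task count can be tracked cheaply with a per-processor dispatch counter. The following is a hedged Python sketch of the idea, not the paper's implementation; class and method names are invented for illustration.

```python
from collections import defaultdict

class MIScheduler:
    """Minimum-intervening sketch: each processor keeps a running
    dispatch counter, and each task records that counter's value at
    its last run on each processor.  The number of intervening tasks
    is simply the difference between the two."""

    def __init__(self, n_cpus):
        self.n_cpus = n_cpus
        self.dispatches = [0] * n_cpus      # per-CPU dispatch counter
        self.last_seen = defaultdict(dict)  # task -> {cpu: counter at last run}

    def intervening(self, task, cpu):
        # A task that never ran on this CPU has no affinity there,
        # so treat the intervening count as infinite.
        if cpu not in self.last_seen[task]:
            return float("inf")
        return self.dispatches[cpu] - self.last_seen[task][cpu]

    def pick_cpu(self, task):
        # Choose the CPU whose cache most likely still holds the
        # task's footprint: minimum intervening dispatches.
        return min(range(self.n_cpus), key=lambda c: self.intervening(task, c))

    def run(self, task, cpu):
        # Record a dispatch of `task` on `cpu`.
        self.dispatches[cpu] += 1
        self.last_seen[task][cpu] = self.dispatches[cpu]
```

The counter trick avoids scanning dispatch history: one integer per processor plus one integer per (task, processor) pair that has actually been used.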
- Limited minimum intervening (LMI)
  - Same as MI, but limits the number of tasks for which each processor maintains affinity information
- LMI routing (LMIR)
  - Same as LMI, but the scheduling decision is made when the task becomes runnable, not when a processor goes idle
  - For each processor, compute the length of its run queue plus the task's intervening count on that processor
  - The task is routed to the processor that minimizes this sum
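The LMIR routing step reduces to an argmin over a simple cost. A minimal Python sketch, with `route` and the `intervening` helper as assumed names (the helper stands in for whatever affinity measure the scheduler maintains):

```python
def route(task, queues, intervening):
    """LMIR sketch: at the moment `task` becomes runnable, route it
    to the processor minimizing (run-queue length + intervening count).

    queues:      list of per-CPU run queues (lists of task ids)
    intervening: callable (task, cpu) -> int, the affinity measure
    """
    cost = lambda cpu: len(queues[cpu]) + intervening(task, cpu)
    best = min(range(len(queues)), key=cost)
    queues[best].append(task)
    return best
```

The cost function captures the trade-off the notes describe: a processor with strong affinity (low intervening count) can still lose to a less-loaded processor if its queue is long enough.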
- LMIR behaves much like FP, which usually gives the best throughput
- However, significant differences between LMIR and the other MI policies show up only when the number of tasks is very high