Optimize ConcurrentLru read throughput #645
Merged
`LruItem.WasAccessed` was previously `volatile` to ensure that thread A's marking of an item as accessed is visible to thread B, which is cycling the cache. Under the covers, `volatile` equates to a half fence for reads and writes: per the .NET memory model, a volatile read has acquire semantics and a volatile write has release semantics.
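For illustration, a minimal sketch of the previous layout (field and property names simplified; the real `LruItem<K, V>` carries more state):

```csharp
public class LruItem<K, V>
{
    // volatile: every read is an acquire half fence and every write is a
    // release half fence, so the cycling thread always observes the flag.
    private volatile bool wasAccessed;

    public bool WasAccessed
    {
        get => this.wasAccessed;
        set => this.wasAccessed = value;
    }
}
```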
Immediately before calling `ConcurrentLruCore.Cycle`, there is always an interlocked call. We can thus piggy-back on the full fence of that interlocked operation and avoid the half fences. Without the check in `MarkAccessed`, this alone does not produce the same throughput boost as #643, because x64 has a strong memory model (the write already has release semantics and generates traffic to keep CPU caches coherent).
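A minimal sketch of the optimized scheme (`MarkAccessed` and `Cycle` are the names from this PR; the counter and surrounding logic are simplified for illustration):

```csharp
using System.Threading;

public class LruItem<K, V>
{
    // Plain field: no per-access half fences on the hot path.
    private bool wasAccessed;

    public bool WasAccessed
    {
        get => this.wasAccessed;
        set => this.wasAccessed = value;
    }

    public void MarkAccessed()
    {
        // Read before write: on repeat hits the flag is already set, and
        // skipping the redundant write avoids invalidating the cache line
        // held by other cores (the effect measured in #643).
        if (!this.wasAccessed)
        {
            this.wasAccessed = true;
        }
    }
}

public class ConcurrentLruCoreSketch
{
    private int hotCount;

    private void AddAndCycle()
    {
        // An interlocked call always precedes Cycle; its full fence
        // publishes the plain wasAccessed writes to the cycling thread.
        Interlocked.Increment(ref this.hotCount);
        Cycle();
    }

    private void Cycle()
    {
        // Moves items between the hot/warm/cold queues, reading WasAccessed.
    }
}
```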
**Before**

**After**