Home
Welcome to the LruClockCache wiki!
This is a header-only project that implements an LRU (least-recently-used) cache using the CLOCK second-chance approximation. It uses an unordered_map and circular buffers for the book-keeping behind get/set operations and cache misses. Cache misses are handled inside the cache, but the miss handlers (how to read a missing key from the backing store and how to write an evicted value back to it) must be supplied by the user as constructor parameters. Whenever a cache slot is evicted, these user-given cache-miss functions are called, so most accesses stay in RAM and backing-store accesses are minimized. Because CLOCK is only an approximation, eviction order is not perfectly "least recently used", but the C++ implementation is much faster than the JavaScript version of the same algorithm: roughly 20-50 million lookups per second in C++ versus about 1.5 million in JavaScript.
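A minimal usage sketch is shown below. The class name, the get/set/flush methods, and the constructor taking a slot count plus read-miss and write-miss lambdas follow the project's README; verify the exact signatures against the headers in this repository before relying on them.

```cpp
#include <iostream>
#include <vector>
#include "LruClockCache.h"

int main()
{
    std::vector<int> backingStore(10000, 0);           // the (slow) backing store

    // Cache with 1024 slots. The two lambdas are the user-given cache-miss
    // handlers: how to read a missing key and how to write back an evicted value.
    LruClockCache<int, int> cache(1024,
        [&](int key)            { return backingStore[key]; },   // read miss
        [&](int key, int value) { backingStore[key] = value; }); // write back on eviction

    cache.set(5, 42);                  // write goes through the cache
    std::cout << cache.get(5) << "\n"; // served from cache, no backing-store access
    cache.flush();                     // push dirty slots back to the backing store
    return 0;
}
```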
Benchmark results with an FX8150 CPU at 3.6 GHz and single-channel 1600 MHz DDR3 RAM:

- Multi-threaded coherent single-level read+write cache (N-way set-associative): ~70 M lookups per second in a partially conflicting Gaussian Blur workload
- Multi-threaded coherent single-level read+write cache (direct-mapped): ~150 M lookups per second in a partially conflicting Gaussian Blur workload
- Multi-threaded multi-level read-only cache (per-thread private direct-mapped + per-thread private LRU approximation + multithreaded shared direct-mapped LLC): ~2400 M lookups per second in a sequential but non-thrashing access pattern
- Single-threaded multi-level read-write cache (direct-mapped + LRU approximation): ~400 M lookups per second in a Gaussian Blur algorithm (a small two-level sketch follows this list)
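The repository ships its own multi-level cache headers; the sketch below does not use them. It hand-writes a toy direct-mapped front cache in front of an LruClockCache purely to illustrate the layering idea behind the multi-level configurations above (small fast L1, larger LRU-approximation L2, real backing store behind both). All names here other than LruClockCache are illustrative.

```cpp
#include <array>
#include <iostream>
#include <vector>
#include "LruClockCache.h"

int main()
{
    std::vector<int> backingStore(1 << 20, 0);

    // L2: LRU-CLOCK cache in front of the (slow) backing store.
    LruClockCache<int, int> L2(1024 * 64,
        [&](int key)            { return backingStore[key]; },
        [&](int key, int value) { backingStore[key] = value; });

    // Toy read-only direct-mapped L1: 256 slots (power of two), tag = key.
    constexpr int l1Size = 256;
    std::array<int, l1Size> l1Tag;   l1Tag.fill(-1);
    std::array<int, l1Size> l1Value; l1Value.fill(0);

    auto read = [&](int key) {
        const int slot = key & (l1Size - 1);   // direct-mapped index
        if (l1Tag[slot] != key) {              // L1 miss: fall through to L2
            l1Tag[slot]   = key;
            l1Value[slot] = L2.get(key);
        }
        return l1Value[slot];
    };

    backingStore[12345] = 7;
    std::cout << read(12345) << "\n";  // first read: L1 miss -> L2 -> backing store
    std::cout << read(12345) << "\n";  // second read: served from the toy L1
    return 0;
}
```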