A low-latency LRU-approximation cache in C++ using the CLOCK second-chance algorithm. Multi-level cache too. Up to 2.5 billion lookups per second.
Updated Jan 18, 2024 · C++
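The CLOCK second-chance policy mentioned above approximates LRU without maintaining a full recency order: each slot carries a reference bit, and a "clock hand" sweeps over the slots, giving referenced entries a second chance before evicting. A minimal sketch of the idea (int keys/values and all names here are illustrative assumptions, not taken from the repository):

```cpp
#include <cstddef>
#include <optional>
#include <unordered_map>
#include <vector>

// Minimal CLOCK (second-chance) cache sketch: an LRU approximation.
class ClockCache {
public:
    explicit ClockCache(std::size_t capacity) : slots_(capacity), hand_(0) {}

    std::optional<int> get(int key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;
        slots_[it->second].referenced = true;  // mark recently used on hit
        return slots_[it->second].value;
    }

    void put(int key, int value) {
        auto it = index_.find(key);
        if (it != index_.end()) {              // update in place
            slots_[it->second].value = value;
            slots_[it->second].referenced = true;
            return;
        }
        // Sweep the clock hand: referenced slots get a second chance
        // (their bit is cleared), the first unreferenced slot is the victim.
        while (slots_[hand_].occupied && slots_[hand_].referenced) {
            slots_[hand_].referenced = false;
            hand_ = (hand_ + 1) % slots_.size();
        }
        if (slots_[hand_].occupied) index_.erase(slots_[hand_].key);
        slots_[hand_] = {key, value, /*referenced=*/true, /*occupied=*/true};
        index_[key] = hand_;
        hand_ = (hand_ + 1) % slots_.size();
    }

private:
    struct Slot {
        int key = 0;
        int value = 0;
        bool referenced = false;
        bool occupied = false;
    };
    std::vector<Slot> slots_;
    std::unordered_map<int, std::size_t> index_;
    std::size_t hand_;
};
```

The appeal of CLOCK over exact LRU is that a hit only sets one bit instead of relinking a list node, which is what makes very high lookup rates plausible.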
Cache controller for a multi-level cache memory using four-way set-associative mapping with write-back, no-write-allocate, and LRU policies. Implemented on a Basys3 Artix-7 FPGA with proper delays and hit signals.
Pipelined Processor which implements RV32i Instruction Set. Also contains pipelined L1 4-way set-associative Instruction Cache, direct-mapped L1 Data Cache, and a 4-way set-associative L2 Victim Cache with a fully-associative 8-entry Victim Buffer. Also has a tournament branch predictor (global and local predictors) and a set-associative BTB.
Simulator for direct, associative, and set-associative mapping techniques in cache allocation
C# implementation of a set-associative cache with multiple policies (LRU, LFU, etc.)
A program that simulates how blocks from main memory are mapped to cache under three strategies: direct-mapped, fully-associative, and set-associative
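All three mapping strategies listed in these simulators reduce to one address split: a byte offset within the block, a set index, and a tag, where the number of sets is the only parameter that changes. A hedged sketch (field widths and the function name are illustrative, not from any of the repositories):

```cpp
#include <cstdint>

// How an address splits into tag / set index / block offset.
// num_sets = total lines for direct-mapped, lines/ways for set-associative,
// and 1 for fully-associative (where the index field disappears).
struct AddressFields {
    std::uint32_t tag;
    std::uint32_t index;   // which set (or line) the block maps to
    std::uint32_t offset;  // byte within the block
};

AddressFields decompose(std::uint32_t addr,
                        std::uint32_t block_size,  // bytes, power of two
                        std::uint32_t num_sets) {
    std::uint32_t offset = addr % block_size;
    std::uint32_t block  = addr / block_size;
    return {block / num_sets, block % num_sets, offset};
}
```

For example, with 16-byte blocks and 8 sets, address 0x1234 falls in block 0x123, so it lands in set 0x123 mod 8 with tag 0x123 / 8.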
Storing data in 16-bit multilevel Direct Mapped, Associative, N-way Set Associative cache memory
2-level TLB Controller
A simple implementation of a direct-mapped cache and a set-associative cache in C++. Supports different cache sizes, block sizes, number of ways, etc.
Fully parametric set-associative cache with a pseudo-least-recently-used replacement policy, implemented in VHDL.
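Tree-based pseudo-LRU, the policy this VHDL design uses, tracks recency for a 4-way set with just 3 bits arranged as a small binary tree: an access flips the bits on its path to point away from the accessed way, and the victim is found by walking toward the "not recently used" side. A C++ model of that scheme (a sketch of the general technique, not the repository's VHDL):

```cpp
// Tree-based pseudo-LRU for one 4-way set: 3 bits, one per tree node.
// b_[0] is the root (left half vs right half), b_[1] covers ways 0/1,
// b_[2] covers ways 2/3. A set bit means "this side was used recently".
class Plru4 {
public:
    // Record an access to `way` (0..3): point the tree bits at it.
    void touch(int way) {
        b_[0] = (way < 2);                    // left half recently used?
        if (way < 2) b_[1] = (way == 0);      // within left half: way 0 used?
        else         b_[2] = (way == 2);      // within right half: way 2 used?
    }

    // Pick a victim by descending toward the less recently used side.
    int victim() const {
        if (b_[0]) return b_[2] ? 3 : 2;      // left half hot: evict on right
        return b_[1] ? 1 : 0;                 // right half hot: evict on left
    }

private:
    bool b_[3] = {false, false, false};
};
```

The trade-off is the same as CLOCK's: 3 bits per set instead of the ordering state exact LRU needs, at the cost of occasionally evicting a line that is not the true LRU.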
The repository simulates a direct-mapped and a four-way set-associative cache.
Simulation of Set Associative Cache