Benchmark framework for compute-in-memory-based deep neural network accelerators (focused on on-chip training chips)
Updated Feb 21, 2024 - C++
MAPLE's hardware-software co-design allows programs to perform long-latency memory accesses asynchronously from the core, avoiding pipeline stalls and enabling greater memory-level parallelism (MLP).