
Intelligent CIM Compilation: The Best of Both Worlds
Optimizing dual-mode capabilities in Computing-in-Memory accelerators
This research introduces a dual-mode-aware compilation framework for Computing-in-Memory (CIM) accelerators, whose CIM macros can serve either as compute units or as on-chip memory. By choosing the better role for each, the compiler significantly improves performance for deep neural networks.
- Achieves a 21.9% performance improvement over traditional CIM-only compilation approaches
- Introduces a hybrid optimization algorithm that identifies optimal mode configurations (a toy sketch of such a mode-assignment search follows this list)
- Implements innovative data flow analysis to maximize memory bandwidth utilization
- Demonstrates real-world effectiveness across multiple DNN architectures
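To make the mode-assignment idea concrete, here is a minimal, hypothetical sketch rather than the paper's actual algorithm. It assumes each layer comes with estimates of the latency saved when its CIM macros run in compute mode versus when they are reused as extra on-chip memory, and it exhaustively searches for the assignment with the largest total saving under a compute-macro budget. All names, signatures, and numbers below are illustrative assumptions.

```python
# Illustrative sketch only; not the paper's hybrid optimization algorithm.
# LayerProfile, assign_modes, and all figures are hypothetical.
from dataclasses import dataclass
from itertools import product

@dataclass
class LayerProfile:
    name: str
    compute_gain_ms: float   # latency saved if this layer's macros run in compute (CIM) mode
    memory_gain_ms: float    # latency saved if those macros instead serve as on-chip memory

def assign_modes(layers, max_compute_macros, macros_per_layer=1):
    """Pick a compute/memory mode per layer to maximize estimated latency
    savings, subject to a budget on macros used in compute mode."""
    best_saving, best_assignment = float("-inf"), None
    for modes in product(("CIM", "MEM"), repeat=len(layers)):
        used = sum(macros_per_layer for m in modes if m == "CIM")
        if used > max_compute_macros:
            continue  # exceeds the compute-macro budget
        saving = sum(
            layer.compute_gain_ms if m == "CIM" else layer.memory_gain_ms
            for layer, m in zip(layers, modes)
        )
        if saving > best_saving:
            best_saving = saving
            best_assignment = {layer.name: m for layer, m in zip(layers, modes)}
    return best_assignment, best_saving

if __name__ == "__main__":
    layers = [
        LayerProfile("conv1", compute_gain_ms=1.8, memory_gain_ms=0.4),
        LayerProfile("conv2", compute_gain_ms=0.6, memory_gain_ms=1.1),
        LayerProfile("fc",    compute_gain_ms=0.9, memory_gain_ms=0.7),
    ]
    assignment, saving = assign_modes(layers, max_compute_macros=2)
    print(assignment, f"estimated saving: {saving:.1f} ms")
```

A real compiler would replace this brute-force search with the paper's hybrid optimization and would also account for data-flow and memory-bandwidth effects when estimating the gains.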
This work enables hardware engineers to fully exploit the dual-mode capability of CIM accelerators, closing a critical gap in existing compiler support and offering a practical path to more efficient AI hardware deployment.
Original Paper: Be CIM or Be Memory: A Dual-mode-aware DNN Compiler for CIM Accelerators