A compiler from an AI model to an RTL (Verilog) accelerator on FPGA hardware, with automatic design space exploration, for AdderNet.
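The key property the accelerator exploits is that AdderNet [1] replaces the multiply-accumulate of an ordinary convolution with the negative L1 distance between an input patch and the filter, so the datapath needs only adders. A minimal single-channel sketch of that operation (names and shapes are illustrative, not from this repository):

```python
def adder_conv2d(x, f):
    """Valid-mode 2-D adder "convolution" for one channel, per AdderNet [1].

    x: H x W input (list of lists), f: kH x kW filter.
    Returns an (H-kH+1) x (W-kW+1) output where each entry is the
    negative sum of absolute differences between patch and filter.
    """
    kh, kw = len(f), len(f[0])
    oh = len(x) - kh + 1
    ow = len(x[0]) - kw + 1
    out = []
    for r in range(oh):
        row = []
        for c in range(ow):
            # Accumulate absolute differences: additions and
            # comparisons only, no multipliers in the hardware datapath.
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += abs(x[r + i][c + j] - f[i][j])
            row.append(-acc)
        out.append(row)
    return out

if __name__ == "__main__":
    x = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
    f = [[1, 0],
         [0, 1]]
    print(adder_conv2d(x, f))  # [[-10, -14], [-22, -26]]
```

On an FPGA this maps each output to a tree of subtract/absolute/add units instead of DSP multipliers, which is the energy saving the papers below quantify.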
Fig 1. Visualization of features in AdderNets and CNNs. [1]

Fig 2. Visualization of features in different neural networks on the MNIST dataset. [3]
- GitHub discussions or issues
- QQ Group: 697948168 (password: AccANN)
- Email: yidazhang#gmail.com
[1] AdderNet: Do We Really Need Multiplications in Deep Learning? Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu. CVPR, 2020. [paper | code]
[2] AdderSR: Towards Energy Efficient Image Super-Resolution. Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, Dacheng Tao. arXiv, 2020. [paper | code]
[3] ShiftAddNet: A Hardware-Inspired Deep Network. Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin. NeurIPS, 2020. [paper | code]
[4] Kernel Based Progressive Distillation for Adder Neural Networks. Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang. NeurIPS, 2020. [paper | code]
[5] GhostNet: More Features from Cheap Operations. [paper | code]
[6] MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. [paper | code]
[7] VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing. Qian Zhang, Jianjun Li, Meng Yao. [paper | code]
[8] And the Bit Goes Down: Revisiting the Quantization of Neural Networks. Pierre Stock, Armand Joulin, Rémi Gribonval. ICLR, 2020. [paper | code]
[9] DNNBuilder: An Automated Tool for Building High-Performance DNN Hardware Accelerators for FPGAs. [paper | code]
[10] AdderNet and Its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence. Yunhe Wang, Mingqiang Huang, Kai Han, et al. [paper | code]
[11] PipeCNN: An OpenCL-Based Open-Source FPGA Accelerator for Convolution Neural Networks. Dong Wang, Ke Xu, Diankun Jiang. FPT, 2017. [paper | code]