Trung Pham-Dinh, Bao Bach-Gia, Lam Luu-Trinh, Minh Nguyen-Dinh, Hai Pham-Duc, Khoa Bui-Anh, Xuan-Quang Nguyen, Cuong Pham-Quoc
Hardware-based acceleration is a widely used approach to speeding up computationally intensive mathematical operations. This paper proposes an FPGA-based architecture to accelerate the convolution operation - a complex and expensive computing step that appears in many Convolutional Neural Network models. We target the standard convolution operation, intending to launch the product as an edge-AI solution. The project aims to produce an FPGA IP core that processes one convolutional layer at a time. Because the architecture is designed primarily in Verilog HDL, system developers can deploy the IP core on various FPGA families. The experimental results show that a single computing core synthesized on a simple edge-computing FPGA board offers 0.224 GOPS, and 4.48 GOPS can be achieved when the board is fully utilized.
Keywords: Convolution Operation, FPGA, Hardware Acceleration, IP core, Edge Computing
Our paper was accepted at The First International Conference on Intelligence of Things (ICIT 2022) (Paper ID: 49).