NeuroSpector: Dataflow and Mapping Optimization of Deep Neural Network Accelerators
A number of hardware accelerators have been proposed to speed up deep neural network (DNN) computations and enhance energy efficiency. DNN accelerators offer high-throughput, energy-efficient computing by deploying many processing elements (PEs) and chips in parallel and by exploiting data reuse across multiple levels of the memory hierarchy. The vertical and spatial arrangements of buffers and PEs form a multi-level hierarchy of accelerator components, from multiply-accumulate (MAC) units to global buffers and off-chip DRAM. For diverse DNN workload configurations and accelerator implementations, finding the right way to execute neural layers on an accelerator to maximize energy efficiency and performance is a highly challenging problem. The challenge lies in the fact that hardware specifications (e.g., the number of PEs and chips, buffer sizes and types) combined with workload configurations (e.g., width, height, channel, batch size) produce an enormous number of possible dataflow and mapping options that can be exercised in an accelerator.
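
To give a feel for the size of this scheduling space, the following minimal Python sketch counts only the tiling choices for a single convolution layer across a four-level hierarchy. The layer dimensions, level count, and dimension names (K, C, P, Q, R, S, N) are illustrative assumptions rather than NeuroSpector's internal model, and loop permutations and spatial mappings would multiply the count even further.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def factorizations(n, levels):
        # Count the ordered ways to split dimension n into `levels` integer
        # tile factors whose product is n (one factor per memory level).
        if levels == 1:
            return 1
        return sum(factorizations(n // d, levels - 1)
                   for d in range(1, n + 1) if n % d == 0)

    # Hypothetical conv layer: K/C output/input channels, P x Q output map,
    # R x S filter, batch N; levels: MAC, local buffer, global buffer, DRAM.
    dims = {"K": 64, "C": 64, "P": 56, "Q": 56, "R": 3, "S": 3, "N": 4}
    total = 1
    for size in dims.values():
        total *= factorizations(size, 4)
    print(f"{total:,} tiling choices for a single layer")  # ~7.2 billion

Even before considering loop orders or how the PE array is used spatially, one modestly sized layer already admits billions of legal tilings, which is why exhaustive search is impractical and a systematic pruning strategy is needed.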

NeuroSpector is a scheduling optimization framework that systematically analyzes the dataflow and mapping possibilities of DNN workloads in accelerators and rapidly identifies optimal execution schemes. NeuroSpector finds scheduling solutions for a variety of DNN accelerators and workloads 7,958x faster than previous work, with only 1.5% energy and cycle differences on average from the optimal schemes. In contrast, prior techniques produce hit-or-miss results whose energy and cycles are 100.1% greater than the optimal solutions on average and as much as 14.9x greater in the worst case. In addition, NeuroSpector supports many essential features of DNN accelerators and workloads, including group convolutions, multi-chip accelerators, data bypassing in buffers, unified/separate buffer types, static power modeling, and network-wise scheduling optimization, all of which were overlooked or only partially supported in related work.
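
The gap between good and bad schedules that these numbers reflect can be seen even with a toy cost model. The sketch below uses an invented per-access energy and a simplified GEMM-style layer, not NeuroSpector's actual estimator; it estimates DRAM traffic for two legal output tilings of the same layer and shows how widely their energy estimates diverge.

    # Toy cost model with invented numbers; not NeuroSpector's estimator.
    E_DRAM_PJ = 200.0  # hypothetical picojoules per DRAM element access
    M = N = K = 512    # GEMM-style layer: C = A @ B

    def dram_traffic(tm, tn):
        # Elements fetched from DRAM when C is computed in tm x tn output
        # tiles: each tile streams tm*K of A and K*tn of B; C is written once.
        tiles = (M // tm) * (N // tn)
        return tiles * (tm * K + K * tn) + M * N

    for tm, tn in [(8, 8), (64, 64)]:
        traffic = dram_traffic(tm, tn)
        energy_uj = traffic * E_DRAM_PJ / 1e6
        print(f"tile {tm}x{tn}: {traffic:,} element accesses, ~{energy_uj:.0f} uJ")

Even in this two-level toy, the larger output tile cuts estimated DRAM energy by roughly 7.6x; across real multi-level hierarchies and the full mapping space, the spread is far larger, which is why the choice of schedule dominates efficiency.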

Download and Documentation
The latest release of NeuroSpector is v1.4 (as of Feb. 2023). For detailed instructions regarding the prerequisites, installation, and execution of NeuroSpector, please visit the GitHub repository: https://github.com/yonsei-icsl/neurospector, and refer to the README file.

To cite NeuroSpector, please use our TPDS paper:

@article{park_tpds2023,
    author  = {C. Park and B. Kim and S. Ryu and W. Song},
    title   = {{NeuroSpector: Systematic Optimization of Dataflow Scheduling in DNN Accelerators}},
    journal = {IEEE Transactions on Parallel and Distributed Systems},
    volume  = {34},
    number  = {8},
    month   = {Aug.},
    year    = {2023},
    pages   = {2279-2294},
}