Nebula: Lightweight Neural Network Benchmarks
The evolution of computing systems and explosive data production have propelled the advance of machine learning. As neural networks become increasingly important applications, developing representative neural network benchmarks has emerged as a pressing engineering challenge. Recent neural networks tend to form deeper networks to enhance accuracy and applicability, but such approaches impose great challenges on the modeling, simulation, and analysis of computing systems, since they require a prohibitively long execution time to process a large number of operations and sizable data. Neural networks consist mostly of matrix and vector calculations that repeat numerous times on multi-dimensional data across channels, layers, batches, etc. This observation motivates us to develop a lightweight neural network benchmark suite named Nebula.

The Nebula suite is built on a C++ framework and currently consists of seven representative neural networks: ResNet, VGG, AlexNet, MLP, DBN, LSTM, and RNN. We plan to add more neural network models to the pool in future releases, including MobileNet, YOLO, FCN, and GAN. Inspired by popular benchmark suites such as PARSEC and SPLASH-3 that let users choose a different input size per benchmark, Nebula offers multiple size options, from large to small datasets, for various types of neural networks. The large benchmarks are full-fledged neural networks that implement the complete network structures and execute on massive datasets, while the medium and small benchmarks are downsized representations of the full-fledged networks. The downsized benchmarks are implemented by formulating variable-sized datasets and compacting the networks to support datasets of different sizes.

The lightweight benchmarks aim at modeling the proxy behaviors of full-fledged neural networks to alleviate the challenges of executing hefty neural network workloads. Nebula benchmarks, as “proxy apps,” intend to reduce the computational costs of full-fledged networks while still capturing end-to-end neural network behaviors. We hope the multi-size options broaden the usability and affordability of Nebula benchmarks in diverse experimental environments, from real hardware to architecture simulations, in which users can select the benchmark size appropriate for their needs.


Prerequisite, Download, and Build
Nebula uses g++ and nvcc to compile C++ and CUDA code for execution on CPUs and NVIDIA GPUs. The Nebula benchmark suite requires g++-5.4, nvcc-9.0, opencv-3.2, and openblas-0.2 (or any later versions of these), cublas from nvidia-384 or later, and cudnn from version 7.0.4 to 7.6.5. For example, in Ubuntu, the following command installs the prerequisite packages; note that nvcc is not an apt package but is installed as part of the CUDA toolkit described below.

$ sudo apt-get install build-essential g++ libopenblas-dev libopencv-dev

Prior to installing CUDA libraries, make sure the GPU has proper driver support. If the GPU has not been configured, download an installation package from the link: https://developer.nvidia.com/cuda-toolkit-archive, and execute the installer with sudo privileges, as in the following command. The cublas library is installed as a part of the driver package.

$ sudo ./cuda_9.0.176_384.81_linux-run
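As noted above, the GPU needs driver support before CUDA is installed. One way to check for an existing driver is via nvidia-smi, which ships with the driver package; a minimal sketch:

```shell
# Check whether an NVIDIA driver is already installed before setting up CUDA.
# nvidia-smi is bundled with the driver package, so its absence means the
# driver still needs to be installed.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version --format=csv
else
    echo "No NVIDIA driver detected; install the driver before CUDA."
fi
```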

To install the cudnn library, download an archive from the link: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html. Make sure the cudnn build matches the CUDA version installed on your system. Then, execute the following commands to install the library.

$ tar xf cudnn-10.2-linux-x64-v7.6.5.32.tgz
$ sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
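After copying the files, one way to confirm the installation is to print the version macros from the installed header. This sketch assumes the default /usr/local/cuda prefix and that the macros live in cudnn.h, as they do in the cudnn 7.x releases used here:

```shell
# Print the cuDNN version macros from the installed header as a sanity check.
# The path assumes the default CUDA install prefix; adjust it if CUDA lives
# elsewhere on your system.
CUDNN_HDR=/usr/local/cuda/include/cudnn.h
if [ -f "$CUDNN_HDR" ]; then
    grep -E 'define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$CUDNN_HDR"
else
    echo "cudnn.h not found at $CUDNN_HDR; check the install prefix."
fi
```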

The latest release of the Nebula benchmark suite is v1.4.1 (as of November 2020). Use the following git command to obtain this version of Nebula. Alternatively, you may get the latest stable copy of the Nebula framework from the master branch by omitting the --branch option in the command below.

$ git clone --branch v1.4.1 https://github.com/yonsei-icsl/nebula

To build all the Nebula benchmarks, execute the nebula.sh script in the main directory as follows.

$ cd nebula/
$ ./nebula.sh build all

The following example invokes a training run of the large (full-fledged) ResNet benchmark.

$ ./nebula.sh train resnet large
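The same pattern should apply to the other networks and sizes listed above. For example, assuming the command keywords follow the network names and large/medium/small size options described earlier (an assumption; consult the README for the exact keywords), a medium-sized VGG training run would look like:

```shell
# Train the medium-sized VGG benchmark. Run from the nebula/ directory after
# building; the "vgg" and "medium" keywords are assumed to follow the naming
# used elsewhere in this document.
if [ -x ./nebula.sh ]; then
    ./nebula.sh train vgg medium
else
    echo "nebula.sh not found; run this from the nebula/ directory."
fi
```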


Documentation
For detailed instructions regarding the installation and use of the Nebula benchmark suite, visit the GitHub repository: https://github.com/yonsei-icsl/nebula, and refer to the README file. To cite the Nebula benchmark suite, please use our TC paper.

@article{kim_tc2021,
    author  = {B. Kim and S. Lee and C. Park and H. Kim and W. Song},
    title   = {{The Nebula Benchmark Suite: Implications of Lightweight Neural Networks}},
    journal = {IEEE Transactions on Computers},
    volume  = {70},
    number  = {11},
    month   = {Nov.},
    year    = {2021},
    pages   = {1887--1900},
}