Nebula features a fully open-source implementation in C++, providing users with a transparent, efficient, and versatile framework. The benchmark suite currently includes seven representative neural networks – ResNet, VGG, AlexNet, MLP, DBN, LSTM, and RNN – with plans to expand the collection to include MobileNet, YOLO, Inception, and more. Inspired by popular benchmark suites like PARSEC and SPLASH-3, Nebula provides multiple dataset size options, ranging from large, full-fledged networks to medium and small proxies, offering flexibility for diverse experimental environments.
Nebula was initially designed to provide lightweight neural network benchmarks that capture the key behaviors of full-fledged networks while offering scalable options to suit various hardware and simulation environments at low computational overhead. However, the key appeal of the Nebula benchmark suite has since become its robust C++ foundation and open-source accessibility, making it a valuable tool for users aiming to simulate, analyze, and experiment with neural networks effectively. Accordingly, future updates to Nebula will expand the collection with full-sized networks only, phasing out the lightweight proxies.
Prerequisite, Download, and Build
Nebula uses g++ and nvcc to compile its C++ and CUDA code for execution on CPUs and NVIDIA GPUs. The Nebula benchmark suite requires g++ 5.4, nvcc 9.0, OpenCV 3.2, and OpenBLAS 0.2 (or any later versions of these), cuBLAS from the nvidia-384 driver package or later, and cuDNN versions 7.0.4 through 7.6.5. For example, in Ubuntu, the following command installs the prerequisite packages.
$ sudo apt-get install build-essential g++ libopenblas-dev libopencv-dev
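Note that nvcc is installed later as part of the CUDA toolkit rather than through apt. Once the packages above are in place, a quick sanity check of the remaining prerequisites can be performed; the commands below are a minimal example and assume the standard Ubuntu package names.
$ g++ --version                              # should report 5.4 or later
$ dpkg -l | grep -E "libopenblas|libopencv"  # lists the installed OpenBLAS/OpenCV packages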
Prior to installing the CUDA libraries, make sure the GPU has proper driver support. If the GPU has not yet been configured, download an installation package from https://developer.nvidia.com/cuda-toolkit-archive and execute the installer with sudo privileges, as in the following command. The cuBLAS library is installed as part of this package.
$ sudo ./cuda_9.0.176_384.81_linux-run
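To confirm that the driver and toolkit are visible to the system, the standard NVIDIA utilities can be used. The PATH setting below assumes the default runfile installation prefix of /usr/local/cuda; adjust it if a different location was chosen during installation.
$ nvidia-smi                             # reports the driver version and detected GPUs
$ export PATH=/usr/local/cuda/bin:$PATH  # assumes the default installation prefix
$ nvcc --version                         # should report 9.0 or later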
To install the cuDNN library, download an installer from https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html. Then, execute the following commands to install the library.
$ tar xf cudnn-10.2-linux-x64-v7.6.5.32.tgz
$ sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
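As an optional check, the installed cuDNN version can be read from its header and the linker cache refreshed. This assumes a cuDNN 7.x release, which keeps its version macros in cudnn.h.
$ grep -A 2 "#define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h  # prints the major, minor, and patch levels
$ sudo ldconfig                                                    # refreshes the shared library cache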
The latest release of the Nebula benchmark suite is v1.4.1 (as of November 2020). Use the following git command to obtain this version of Nebula. Alternatively, you may get the latest stable copy of the Nebula framework from the master branch by omitting the --branch option in the command below.
$ git clone --branch v1.4.1 https://github.com/yonsei-icsl/nebula
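If you are unsure which releases are available, the tags of the cloned repository can be listed and a specific version checked out with plain git commands; this is generic git usage rather than a Nebula-specific feature.
$ git -C nebula/ tag              # lists the tagged releases
$ git -C nebula/ checkout v1.4.1  # switches to the v1.4.1 release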
To build all the Nebula benchmarks, execute the nebula.sh script in the main directory as follows.
$ cd nebula/
$ ./nebula.sh build all
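Building the entire suite may be unnecessary for a targeted experiment. Assuming nebula.sh accepts an individual benchmark name in place of all (refer to the README for the exact list of supported targets), a single network can be built as sketched below.
$ ./nebula.sh build resnet  # builds only the ResNet benchmark (assumed target name)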
The following example invokes a training run of the full-sized ResNet benchmark.
$ ./nebula.sh train resnet large
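The same pattern is expected to extend to the other networks and dataset sizes introduced above, with the lowercase benchmark names following the resnet example; the commands below are illustrative sketches, and the test (inference) mode in particular is an assumption that should be confirmed against the README.
$ ./nebula.sh train vgg medium     # medium-sized VGG training (assumed naming)
$ ./nebula.sh train alexnet small  # small-sized AlexNet training (assumed naming)
$ ./nebula.sh test resnet large    # inference run, if a test mode is provided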
Documentation
For detailed instructions on installing and using the Nebula benchmark suite, visit the GitHub repository at https://github.com/yonsei-icsl/nebula and refer to the README file. To cite the Nebula benchmark suite, please use our IEEE TC paper below.
@article{kim_tc2021,
  author  = {B. Kim and S. Lee and C. Park and H. Kim and W. Song},
  title   = {{The Nebula Benchmark Suite: Implications of Lightweight Neural Networks}},
  journal = {IEEE Transactions on Computers},
  volume  = {70},
  number  = {11},
  month   = {Nov.},
  year    = {2021},
  pages   = {1887--1900},
}