Paul Eibensteiner
Supervisor(s): Dipl.-Ing. Dr. techn. Markus Steinberger
TU Graz
Abstract: Sparse neural networks are successfully used to speed up inference and reduce the memory requirements of fully trained networks. Recently, however, it has been shown that sparsity can also be exploited during the training phase. In this work, we introduce two new methods for training sparse neural networks from scratch that alter the network's topology during training while maintaining a fixed global level of density. We then compare them to recent state-of-the-art algorithms in a controlled setting on three different datasets. All algorithms are implemented in a GPU-accelerated framework and tested using the KMNIST and HIGGS datasets. The results show that global weight redistribution can significantly improve the network's accuracy while adding little overhead.
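To illustrate the general idea of dynamic sparse training with global weight redistribution referred to in the abstract, the sketch below shows one prune-and-regrow step in which the total number of nonzero weights stays fixed while per-layer densities may shift. The function name, the magnitude-based pruning criterion, and the gradient-based redistribution heuristic are assumptions for illustration only, not the thesis's actual methods.

```python
import numpy as np

def sparse_redistribution_step(weights, masks, grads, prune_frac=0.2, rng=None):
    """Hedged sketch of one dynamic sparse training update:
    1. globally prune the smallest-magnitude active weights across all layers,
    2. regrow the same number of connections, allocating them to layers in
       proportion to each layer's mean gradient magnitude,
    so the global density is preserved while the topology changes.
    weights, masks, grads: lists of equally shaped arrays (masks are 0/1)."""
    rng = rng or np.random.default_rng(0)

    # Collect (|w|, layer index, flat index) for every currently active weight.
    entries = []
    for li, (w, m) in enumerate(zip(weights, masks)):
        for fi in np.flatnonzero(m):
            entries.append((abs(w.flat[fi]), li, fi))

    n_prune = int(prune_frac * len(entries))
    if n_prune == 0:
        return

    # Step 1: prune the n_prune weakest connections over all layers at once.
    entries.sort(key=lambda t: t[0])
    for _, li, fi in entries[:n_prune]:
        masks[li].flat[fi] = 0.0
        weights[li].flat[fi] = 0.0

    # Step 2: redistribute the freed connections; layers with larger average
    # gradient magnitude receive proportionally more regrown weights.
    grad_score = np.array([np.abs(g).mean() for g in grads]) + 1e-12
    share = grad_score / grad_score.sum()
    grow = np.floor(share * n_prune).astype(int)
    grow[np.argmax(share)] += n_prune - grow.sum()  # absorb rounding remainder

    for w, m, k in zip(weights, masks, grow):
        inactive = np.flatnonzero(m.ravel() == 0.0)
        k = min(k, inactive.size)
        if k > 0:
            new = rng.choice(inactive, size=k, replace=False)
            m.flat[new] = 1.0   # activate new connections
            w.flat[new] = 0.0   # regrown weights start at zero
```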
Keywords: Computer Vision, Graphics Hardware
Year: 2022