Alexander Ertl
Supervisor(s): Markus Steinberger
TU Graz
Abstract: Ever larger networks, with parameter counts on the order of hundreds of millions, are required to fit
increasingly complex and extensive datasets. In conjunction with ubiquitous machine learning applications
on mobile and embedded platforms, this makes efficiency a vital property of artificial neural networks.
We therefore build upon work on replacing fully connected dense layers with trainable, evolving sparse layers in
CSR encoding. This allows us to train networks at sparsity levels of up to 97% while considerably reducing both the memory
footprint and the number of computations, indicating
that GPU-accelerated sparse layers are a viable alternative to dense layers.
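As an illustration of the general idea, the following minimal Python/SciPy sketch shows a forward pass through a layer whose weight matrix is stored in CSR format; the layer sizes, activation, and variable names are assumptions chosen for demonstration and are not taken from the work described above.

# Minimal sketch of a CSR-encoded sparse layer forward pass.
# Layer sizes, density, and activation are illustrative assumptions.
import numpy as np
from scipy.sparse import random as sparse_random

in_features, out_features = 784, 300
density = 0.03  # roughly 97% sparsity, matching the level reported in the abstract

# Weight matrix stored in CSR format: only ~3% of the entries are kept.
W = sparse_random(out_features, in_features, density=density, format="csr")
b = np.zeros(out_features)

def forward(x):
    # Sparse matrix-vector product replaces the dense layer's full matmul.
    return np.maximum(W @ x + b, 0.0)  # ReLU activation (assumed)

x = np.random.rand(in_features)
y = forward(x)
print(y.shape, f"{W.nnz} non-zero weights out of {in_features * out_features}")

The sparse product only touches the stored non-zero weights, which is where the reduction in memory footprint and computation comes from; the evolving-sparsity training itself (pruning and regrowing connections) is not shown here.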
Keywords: Computer Vision, Graphics Hardware
Year: 2021