Dramatic speedup in CPU sparse network evaluation

James Bowery

Jul 15, 2021, 11:40:30 PM
to Hutter Prize
This Numenta video describes an order of magnitude speedup in the CPU evaluation of sparse networks, under what appear to be reasonable constraints. 
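The basic win, as I understand it, comes from simply skipping the zero weights. The sketch below is my own illustration, not Numenta's implementation; the layer size and ~5% weight density are assumptions. It compares a dense matrix-vector product for one layer against the same product in compressed sparse row (CSR) form, which only touches the stored nonzeros.

```python
# Minimal sketch of sparse-vs-dense layer evaluation on CPU.
# Not Numenta's implementation; sizes and the 5% density are assumptions.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Hypothetical 1024 -> 1024 layer with ~5% of weights nonzero.
dense_W = rng.standard_normal((1024, 1024)) * (rng.random((1024, 1024)) < 0.05)
sparse_W = csr_matrix(dense_W)      # stores only the ~52K nonzero weights
x = rng.standard_normal(1024)

y_dense = dense_W @ x               # multiplies through all ~1.05M weights
y_sparse = sparse_W @ x             # multiplies only the stored nonzeros

assert np.allclose(y_dense, y_sparse)   # same activations, far fewer FLOPs
```

Whether the FLOP reduction turns into an actual wall-clock speedup depends on memory layout and vectorization, which is where the constraints discussed in the video come in.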

James Bowery

Oct 9, 2021, 11:53:24 PM
to Hutter Prize
Abstract
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on a variety of applications. However, RNNs tend to be memory-bandwidth limited in practice and require long training and inference times. These problems are at odds with training and deploying RNNs on resource-limited devices, where the memory and floating-point operation (FLOPs) budgets are strictly constrained. Conventional model compression techniques address this by reducing inference costs, but they operate on a costly pre-trained model. Recently, dynamic sparse training has been proposed to accelerate training by learning sparse neural networks directly from scratch. However, previous sparse training techniques are designed mainly for convolutional neural networks and multi-layer perceptrons. In this paper, we introduce a method to train intrinsically sparse RNN models with a fixed number of parameters and floating-point operations (FLOPs) throughout training. We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used tasks: language modeling and text classification. We use these results to argue that, contrary to the general belief that training a sparse neural network from scratch leads to worse performance than a dense network, sparse training with adaptive connectivity can usually achieve better performance than dense models for RNNs.
...
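For anyone who hasn't seen dynamic sparse training before, here is a rough sketch of the prune-and-regrow step that keeps the parameter and FLOP budget fixed. The drop and regrow criteria below (magnitude pruning, random regrowth, SET-style) are illustrative assumptions on my part, not necessarily the exact algorithm in the paper.

```python
# Rough sketch of one prune-and-regrow update in dynamic sparse training.
# Drop/regrow criteria here (magnitude pruning, random regrowth) are
# illustrative assumptions, not necessarily the paper's exact algorithm.
import numpy as np

def prune_and_regrow(W, mask, drop_fraction=0.3, rng=None):
    """Drop the weakest active weights and regrow the same number of
    connections elsewhere, so the number of nonzeros never changes."""
    rng = rng or np.random.default_rng()
    flat_W, flat_mask = W.ravel(), mask.ravel()

    active = np.flatnonzero(flat_mask)
    n_drop = int(drop_fraction * active.size)

    # Prune: deactivate the smallest-magnitude active connections.
    weakest = active[np.argsort(np.abs(flat_W[active]))[:n_drop]]
    flat_mask[weakest] = False
    flat_W[weakest] = 0.0

    # Regrow: activate an equal number of currently inactive connections.
    inactive = np.flatnonzero(~flat_mask)
    reborn = rng.choice(inactive, size=n_drop, replace=False)
    flat_mask[reborn] = True
    flat_W[reborn] = 0.01 * rng.standard_normal(n_drop)  # small re-init

    return W, mask

# Example: a recurrent weight matrix held at ~10% density across updates.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
mask = rng.random((256, 256)) < 0.10
W *= mask

n_active = mask.sum()
W, mask = prune_and_regrow(W, mask, rng=rng)
assert mask.sum() == n_active   # parameter/FLOP budget unchanged
```

The point the authors stress is that this rewiring happens during training, so there is never a dense model that has to be compressed afterwards.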