Training low-rank neural network models has recently been shown to reduce the total number of trainable parameters while maintaining predictive accuracy, resulting in end-to-end speedups. The catch, ...