
Description

In this paper, we extend the lifted neural network framework (described in section 2) to recurrent neural networks (RNNs). As in the general lifted neural network case, the activation functions are encoded via penalties in the training problem. The new framework allows algorithms such as block-coordinate descent methods to be applied, in which each step is a simple (no hidden layer) supervised learning problem that is parallelizable across data points and/or layers. The lifted methodology is particularly interesting for recurrent neural networks because standard optimization methods for RNNs perform poorly due to the vanishing and exploding gradient problems. Experiments on toy datasets indicate that our lifted model is better equipped to handle long-term dependencies and long sequences.
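To make the block-coordinate idea concrete, the sketch below shows one possible lifted RNN training loop under simplifying assumptions that are not taken from the paper: an identity activation (so every block reduces to a least-squares problem), a single input sequence, a loss on the final time step only, and illustrative names (W, U, V, H, lam). The hidden states are treated as explicit optimization variables, the recurrence is enforced through a quadratic penalty, and the method alternates between an exact weight update and exact per-time-step state updates; the paper's actual penalties for nonlinear activations and its parallel state updates are not reproduced here.

    # Minimal sketch of block-coordinate descent on a lifted RNN objective.
    # Assumptions (illustrative, not from the paper): identity activation,
    # single sequence, terminal loss only, quadratic recurrence penalty.
    import numpy as np

    rng = np.random.default_rng(0)
    T, d, n, p = 20, 3, 8, 2           # sequence length, input/hidden/output dims
    X = rng.normal(size=(T, d))        # inputs x_1 .. x_T
    y = rng.normal(size=p)             # target at the final step
    lam = 1.0                          # weight on the recurrence penalty

    W = 0.1 * rng.normal(size=(n, n))  # hidden-to-hidden map
    U = 0.1 * rng.normal(size=(n, d))  # input-to-hidden map
    V = 0.1 * rng.normal(size=(p, n))  # hidden-to-output map
    H = np.zeros((T + 1, n))           # lifted hidden states; H[0] is the fixed initial state

    def objective():
        pre = H[:-1] @ W.T + X @ U.T   # W h_{t-1} + U x_t for every t
        return (np.sum((y - V @ H[-1]) ** 2)
                + lam * np.sum((H[1:] - pre) ** 2))

    for it in range(50):
        # Weight block: given the states, each weight matrix solves a least-squares problem.
        A = np.hstack([H[:-1], X])                  # regressors [h_{t-1}, x_t]
        WU = np.linalg.lstsq(A, H[1:], rcond=None)[0].T
        W, U = WU[:, :n], WU[:, n:]
        V = np.linalg.solve(np.outer(H[-1], H[-1]) + 1e-6 * np.eye(n),
                            np.outer(H[-1], y)).T   # ridge-regularized output map

        # State block: each h_t minimizes a small quadratic, so it has a closed form.
        # (Updated sequentially here; in principle these updates can be parallelized.)
        for t in range(1, T + 1):
            a_t = W @ H[t - 1] + U @ X[t - 1]
            if t < T:
                lhs = np.eye(n) + W.T @ W
                rhs = a_t + W.T @ (H[t + 1] - U @ X[t])
            else:                                   # final state also sees the output loss
                lhs = V.T @ V + lam * np.eye(n)
                rhs = V.T @ y + lam * a_t
            H[t] = np.linalg.solve(lhs, rhs)

    print("final objective:", objective())

Because every block is an exact convex minimization, the objective is non-increasing across iterations, and no gradient is ever propagated through the full sequence, which is the property the abstract appeals to when contrasting the lifted approach with standard backpropagation through time.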
