1039
10
11038
2
43783
document
11038
IJCNN2000_drnn.ps
application/postscript
94316
2009-12-08 20:35:02
http://cogprints.org/1039/2/IJCNN2000_drnn.ps
1039
2
2
application/postscript
application/postscript
public
IJCNN2000_drnn.ps

http://eprints.org/relation/hasVolatileVersion
http://cogprints.org/id/document/4992

http://eprints.org/relation/haspreviewThumbnailVersion
http://cogprints.org/id/document/4992

http://eprints.org/relation/hasVersion
http://cogprints.org/id/document/4992
archive
181
disk0/00/00/10/39
2000-10-18
2011-03-11 08:54:25
2007-09-12 16:36:08
confpaper
show
0
This paper extends previous analysis of the gradient decay to a class of discrete-time fully recurrent networks, called Dynamical Recurrent Neural Networks (DRNN), obtained by modelling synapses as Finite Impulse Response (FIR) filters instead of multiplicative scalars. Using elementary matrix manipulations, we provide an upper bound on the norm of the weight matrix which ensures that the gradient vector, when propagated backward in time through the error-propagation network, decays exponentially to zero. These bounds apply to all FIR architecture proposals, as well as to fixed-point recurrent networks, regardless of delay and connectivity. In addition, we show that the computational overhead of the learning algorithm can be reduced drastically by taking advantage of the exponential decay of the gradient.
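The abstract's claim can be illustrated numerically. The sketch below is not the paper's DRNN/FIR construction; it assumes an ordinary fully recurrent network h_t = tanh(W h_{t-1}) and the simple sufficient condition ||W||_2 < 1 (since |tanh'| <= 1, each reverse-mode step then contracts the gradient norm by at least ||W||_2), which is the flavor of bound the paper generalizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 8, 50

# Random recurrent weight matrix, rescaled so its spectral norm is 0.9 < 1.
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)

# Forward pass: h_t = tanh(W h_{t-1}).
h = [rng.standard_normal(n)]
for _ in range(T):
    h.append(np.tanh(W @ h[-1]))

# Reverse pass: the gradient is multiplied by W^T diag(1 - h_t^2) at each
# step, so its norm shrinks by a factor of at most ||W||_2 per step.
g = rng.standard_normal(n)
norms = []
for t in range(T, 0, -1):
    g = W.T @ (g * (1.0 - h[t] ** 2))
    norms.append(np.linalg.norm(g))

# The sequence of norms decays (at least) geometrically with ratio 0.9.
print(norms[0], norms[-1])
```

Under the stated condition the per-step contraction factor is bounded by ||W||_2 regardless of the states h_t, which is why the decay is guaranteed rather than merely typical.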

Aussem
Alex
July 2000
IEEE-INNS-ENNS International Joint Conference on Neural Networks
Como, Italy
pub
Recurrent neural networks, gradient decay, forgetting behavior
577-582
FALSE
TRUE
 compscineuralnets
Sufficient Conditions for Error Back Flow Convergence in Dynamical Recurrent Neural Networks
4
published
2000
public