
Minimisation methods for training feed-forward networks

Smagt, P. van der (1994) Minimisation methods for training feed-forward networks. [Journal (Paginated)]

Full text available as: Postscript (863Kb)

Abstract

Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-forward neural network training is a special case of function minimisation, where no explicit model of the data is assumed. For this reason, and because of the high dimensionality of the data, linearisation of the training problem through the use of orthogonal basis functions is not desirable. The focus is therefore on function minimisation on any basis. Quasi-Newton and conjugate gradient methods are reviewed, and the latter are shown to be a special case of error back-propagation with a momentum term. Three feed-forward learning problems are tested with five methods. It is shown that, due to its fixed stepsize, standard error back-propagation performs well in avoiding local minima. However, by using not only the local gradient but also the second derivative of the error function, a much shorter training time can be achieved. Conjugate gradient with Powell restarts proves to be the superior method.
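The correspondence between conjugate gradient and back-propagation with a momentum term mentioned in the abstract can be sketched as follows. The notation below is illustrative and not taken verbatim from the paper; the Polak-Ribiere formula is given only as one common choice for the conjugacy coefficient.

    % Back-propagation with momentum: fixed learning rate \eta and momentum \alpha
    \Delta w_k = -\eta \, \nabla E(w_k) + \alpha \, \Delta w_{k-1}

    % Conjugate gradient: coefficients recomputed at every iteration,
    % \beta_k from successive gradients (e.g. Polak-Ribiere) and
    % \lambda_k from a line search along the direction d_k
    d_k     = -\nabla E(w_k) + \beta_k \, d_{k-1}
    w_{k+1} = w_k + \lambda_k \, d_k

Both updates add a multiple of the previous step to the negative gradient of the error function E; the methods differ in whether the two coefficients are held fixed or chosen adaptively at each iteration.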

Item Type: Journal (Paginated)
Keywords: feed-forward neural network training, numerical optimisation techniques, neural function approximation, error back-propagation, conjugate gradient, quasi-Newton
Subjects: Computer Science > Neural Nets
ID Code: 497
Deposited By: van der Smagt, Patrick
Deposited On: 03 Jul 1998
Last Modified: 11 Mar 2011 08:54
