dc.contributor.author: Wang, Liya
dc.date.accessioned: 2014-10-01T13:34:48Z
dc.date.available: 2014-10-01T13:34:48Z
dc.date.issued: 1995-12-01
dc.identifier.uri: https://hdl.handle.net/11244/12896
dc.description.abstract: This paper presents a new learning algorithm for training fully connected, feedforward artificial neural networks. The proposed learning algorithm is suited to training neural networks for approximation problems. The framework of the new ANN learning algorithm is based on Newton's method for solving non-linear least squares problems. To improve the stability of the new learning algorithm, the Levenberg-Marquardt technique for safeguarding the Gauss-Newton method is incorporated into the Newton method. This damped version of Newton's method has been implemented in FORTRAN 77, along with several other well-known ANN learning algorithms, in order to evaluate the performance of the new learning algorithm. Satisfactory numerical results have been obtained. It is shown that the proposed learning algorithm outperforms the other algorithms on function approximation problems and on problems that require high training accuracy.
dc.format: application/pdf
dc.language: en_US
dc.publisher: Oklahoma State University
dc.rights: Copyright is held by the author, who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction, or distribution of this material.
dc.title: Damped Newton Method - an ANN Learning Algorithm
dc.type: text
osu.filename: Thesis-1995-W2463d.pdf
osu.accesstype: Open Access
dc.type.genre: Thesis
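The abstract describes damping a Gauss-Newton step with the Levenberg-Marquardt technique to stabilize training on non-linear least squares problems. The thesis itself (and its FORTRAN 77 implementation) is not reproduced here, so the following is only a generic sketch of that damping idea in Python, applied to a toy curve-fitting problem; the function names and the test problem are illustrative, not taken from the thesis.

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-3):
    """Minimize ||residual(p)||^2 with a damped Gauss-Newton (LM) step.

    The normal equations (J^T J + lam*I) dp = -J^T r interpolate between
    Gauss-Newton (lam -> 0) and gradient descent (large lam), which is
    the safeguarding idea the abstract refers to. All names here are
    illustrative; this is not the thesis's implementation.
    """
    p = np.asarray(p0, dtype=float)
    r = residual(p)
    for _ in range(n_iter):
        # Forward-difference Jacobian of the residual vector.
        eps = 1e-7
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        # Damped normal equations: larger lam = shorter, safer step.
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        r_new = residual(p + step)
        if r_new @ r_new < r @ r:       # improvement: accept, relax damping
            p, r, lam = p + step, r_new, lam * 0.5
        else:                           # no improvement: reject, damp harder
            lam *= 10.0
    return p

# Toy example: fit y = a * exp(b * x) with true parameters a=2, b=-1.
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
p_hat = levenberg_marquardt(res, [1.0, 0.0])
```

Adapting the damping factor up on a rejected step and down on an accepted one is what keeps the iteration stable far from the solution while recovering fast Gauss-Newton convergence near it.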

