The Qx-coder
M.J. Slattery, Joan L. Mitchell
IBM J. Res. Dev.
This paper describes a set of feedforward neural network learning algorithms based on classical quasi-Newton optimization techniques, which are demonstrated to be up to two orders of magnitude faster than backward-propagation. Learning performance is then further improved through initial scaling of the inverse-Hessian approximation, which makes the quasi-Newton algorithms invariant to scaling of the objective function. Simulations show that initial scaling improves the rate of learning of the quasi-Newton-based algorithms by up to 50%, for an overall improvement of two to three orders of magnitude over backward-propagation. Finally, the best of these learning methods is used to develop a small writer-dependent online handwriting recognizer for the digits 0 through 9. The recognizer labels the training data correctly with an accuracy of 96.66%.
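The "initial scaling" the abstract refers to is, in standard quasi-Newton practice, a one-time rescaling of the inverse-Hessian approximation H before its first update (the well-known Shanno-Phua choice H <- (s'y / y'y) I), which removes the method's sensitivity to multiplying the objective by a constant. The sketch below is a minimal BFGS trainer in Python illustrating that scaling on a toy one-hidden-unit network; the function names (bfgs_train, loss, loss_grad) and the toy fitting problem are assumptions for illustration, not code or data from the paper.

    import numpy as np

    def bfgs_train(f, grad, w0, max_iter=200, tol=1e-6):
        """Minimize f with BFGS. The inverse-Hessian approximation H is
        rescaled once, just before the first update (Shanno-Phua scaling),
        making the method invariant to scaling of the objective."""
        w = w0.astype(float).copy()
        n = w.size
        H = np.eye(n)                       # initial inverse-Hessian approximation
        g = grad(w)
        scaled = False
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            p = -H @ g                      # quasi-Newton search direction
            t, fw = 1.0, f(w)
            # backtracking line search (Armijo sufficient-decrease condition)
            while t > 1e-10 and f(w + t * p) > fw + 1e-4 * t * (g @ p):
                t *= 0.5
            s = t * p
            w_new, g_new = w + s, grad(w + s)
            y = g_new - g
            sy = s @ y
            if sy > 1e-12:                  # curvature condition holds
                if not scaled:
                    H *= sy / (y @ y)       # initial scaling of H
                    scaled = True
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS update
            w, g = w_new, g_new
        return w

    # Toy demo: fit y = c * tanh(a*x + b), a one-hidden-unit "network".
    xs = np.linspace(-1.0, 1.0, 20)
    ys = 1.5 * np.tanh(2.0 * xs + 0.5)

    def loss(w):
        a, b, c = w
        return 0.5 * np.sum((c * np.tanh(a * xs + b) - ys) ** 2)

    def loss_grad(w):
        a, b, c = w
        h = np.tanh(a * xs + b)
        r = c * h - ys                      # residuals
        dh = 1.0 - h ** 2                   # derivative of tanh
        return np.array([np.sum(r * c * dh * xs),
                         np.sum(r * c * dh),
                         np.sum(r * h)])

    w_star = bfgs_train(loss, loss_grad, np.array([0.1, 0.1, 0.1]))
    print("fitted (a, b, c):", w_star, "final loss:", loss(w_star))

Because the scaling factor s'y / y'y carries the units of the objective, multiplying the loss by any constant leaves the rescaled H, and hence the search directions, unchanged, which is the invariance property the abstract credits for the improved learning rate.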