The goal of this work is to design single-objective and multiobjective stochastic optimization algorithms, based on simulated annealing, for global training of neural networks. The algorithms overcome the local-optimization limitation of conventional gradient-based training methods and perform global optimization of the network weights. In particular, the multiobjective training algorithm is designed to enhance the generalization capability of the trained networks by simultaneously minimizing the training error and the dynamic range of the network weights. For fast convergence and good solution quality, we propose a hybrid simulated annealing algorithm that incorporates a gradient-based local optimization method [1].
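For concreteness, below is a minimal NumPy sketch of the hybrid idea: a simulated annealing loop whose candidate states are refined by a few gradient steps before the acceptance test. Everything here is illustrative rather than the paper's exact HGSA/MOHGSA: mlp_forward, grad_step, and hybrid_sa are hypothetical names, plain gradient descent stands in for the Levenberg-Marquardt refinement, and the perturbation scale and cooling schedule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(X, W1, b1, W2, b2):
    """Single-hidden-layer MLP with tanh hidden units."""
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def rmse(params, X, y):
    W1, b1, W2, b2 = params
    return np.sqrt(np.mean((mlp_forward(X, W1, b1, W2, b2) - y) ** 2))

def grad_step(params, X, y, lr=1e-2):
    # One backprop step on the squared error; plain gradient descent
    # stands in here for the LM-style local refinement.
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    err = 2.0 * (h @ W2 + b2 - y) / len(X)
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ dh, dh.sum(axis=0)
    return [W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2]

def hybrid_sa(X, y, n_hidden=8, iters=2000, T0=1.0, cooling=0.995, step=0.1):
    d_in, d_out = X.shape[1], y.shape[1]
    params = [0.5 * rng.standard_normal((d_in, n_hidden)), np.zeros(n_hidden),
              0.5 * rng.standard_normal((n_hidden, d_out)), np.zeros(d_out)]
    best, best_e = params, rmse(params, X, y)
    T = T0
    for _ in range(iters):
        # Global move: Gaussian perturbation of every weight ...
        cand = [p + step * rng.standard_normal(p.shape) for p in params]
        # ... followed by a few local gradient steps (the "hybrid" part).
        for _ in range(3):
            cand = grad_step(cand, X, y)
        e_new, e_old = rmse(cand, X, y), rmse(params, X, y)
        # Metropolis acceptance; a purely greedy variant would accept
        # only moves with e_new <= e_old.
        if e_new <= e_old or rng.random() < np.exp((e_old - e_new) / T):
            params = cand
            if e_new < best_e:
                best, best_e = cand, e_new
        T *= cooling  # geometric cooling schedule
    return best, best_e

# Example: fit a simple 1-D test function.
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(3 * X)
weights, train_rmse = hybrid_sa(X, y)
```

The random perturbation lets the search escape the local minima that trap purely gradient-based training, while the gradient steps keep convergence fast; this division of labor is the motivation for the hybrid design.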


Generalization root-mean-squared error (RMSE) of multilayer perceptrons (MLPs) trained by the Levenberg-Marquardt (LM) algorithm, the hybrid greedy simulated annealing (HGSA) algorithm, and the multiobjective hybrid greedy simulated annealing (MOHGSA) algorithm. (a), (b) Three-dimensional perspective plots of the test functions. (c), (d) Final solutions obtained by LM, HGSA, and MOHGSA for (a) and (b), respectively.

Evolution of the training and test errors and the sum of squared weight values over the iterations of LM training. (a) A typical run without overfitting on the first test function. (b) A run with overfitting on the first test function. (c) A typical run with overfitting on the second test function.
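As a hedged illustration of the quantities tracked in these plots, one could log the three curves at each training iteration, reusing rmse from the sketch above (training_trace_metrics is a hypothetical helper; the sum of squared weights acts as a proxy for the weight dynamic range that the multiobjective algorithm minimizes):

```python
def training_trace_metrics(params, X_train, y_train, X_test, y_test):
    # The three curves from the figure: training RMSE, test RMSE, and
    # the sum of squared weights (a proxy for weight dynamic range).
    return {
        "train_rmse": rmse(params, X_train, y_train),
        "test_rmse": rmse(params, X_test, y_test),
        "sum_sq_weights": sum(float(np.sum(p ** 2)) for p in params),
    }
```

A test RMSE that rises while the training RMSE keeps falling, typically accompanied by growing weight magnitudes, is the overfitting signature shown in panels (b) and (c).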


References

[1] Y. Lee, J.-S. Lee, S.-Y. Lee, and C. H. Park, "Improving Generalization Capability of Neural Networks based on Simulated Annealing," in Proc. IEEE Congress on Evolutionary Computation (CEC), Sept. 2007.