Optimizing Deep Neural Networks through Neuroevolution with Stochastic Gradient Descent
December 2020 • Haichao Zhang, Kuangrong Hao, Lei Gao, Bing Wei, Xue-song Tang
Deep neural networks (DNNs) have achieved remarkable success in computer vision; however, training DNNs to satisfactory performance remains challenging and is sensitive to the empirical choice of optimization algorithm. Stochastic gradient descent (SGD) dominates DNN training, adjusting the network weights to minimize the DNN's loss function. As an alternative approach, neuroevolution is more in line with an evolutionary process and provides some key capabilities that are of…
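The abstract's central contrast, gradient-based weight updates versus population-based evolutionary search, can be made concrete with a toy hybrid loop. The sketch below is illustrative only and is not the paper's algorithm: it mutates a population of weight vectors, refines each candidate with a few gradient steps on a synthetic least-squares loss (full-batch gradient descent standing in for SGD), and keeps the fittest candidates by truncation selection. All specifics here (`sgd_steps`, the quadratic problem, population size, learning rate, mutation scale) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: find w such that X @ w approximates y.
X = rng.normal(size=(64, 8))
true_w = rng.normal(size=8)
y = X @ true_w

def loss(w):
    # Mean squared error of the linear model.
    r = X @ w - y
    return float(r @ r) / len(y)

def grad(w):
    # Gradient of the mean squared error with respect to w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sgd_steps(w, lr=0.05, steps=10):
    # Local refinement: a few gradient steps on the loss
    # (full-batch here; a stand-in for SGD on minibatches).
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Neuroevolution outer loop: mutate, refine with gradients, select the fittest.
pop = [rng.normal(size=8) for _ in range(12)]
for gen in range(20):
    # Gaussian mutation of each candidate, then gradient-based refinement.
    pop = [sgd_steps(w + rng.normal(scale=0.1, size=8)) for w in pop]
    # Truncation selection: keep the best half, clone it to refill the population.
    pop.sort(key=loss)
    pop = pop[:6] + [pop[i % 6].copy() for i in range(6)]
    print(f"gen {gen:2d}  best loss {loss(pop[0]):.4f}")
```

Interleaving local gradient refinement with evolutionary selection is one generic way such hybrids are organized; how the paper itself coordinates neuroevolution with SGD is described in the full text, not reproduced here.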