Deep learning models contain many hyper-parameters that must be tuned prior to training, and these hyper-parameters greatly influence the quality of the final model. Historically, most research attention has focused on tuning the architecture. However, with the advent of automated machine learning and the success of neural architecture search, automated methods have been successfully applied to other components of neural networks, challenging the hand-designed approaches that inspired the classical methodologies. In this work, automated methods based on evolutionary algorithms were applied to the learning components of a neural network: the loss function, optimizer, learning rate schedule, and output activation function of computer vision models, with the goal of finding drop-in replacements for standard components. I expand upon previous research in each of these domains through the proposal of new search spaces, surrogate functions, genetic algorithms, and improved discovered components. In the end, multiple loss functions, optimizers, learning rate schedules, and output activation functions, all evolved from scratch, were found to outperform cross-entropy, Adam, one-cycle cosine decay, and softmax on the CIFAR datasets.