Complex deep learning objectives such as object detection, saliency prediction, semantic segmentation, and sequence-to-sequence translation have given rise to training processes that demand ever-increasing amounts of time and computational resources. Human-in-the-loop solutions have addressed parts of this problem; one persistent pain point is model hyperparameter search, where common search methods have high time costs and require iteratively training several models. Several algorithms have been proposed to manipulate a neural network's architecture and alleviate this cost, but they require tuning of their own parameters to achieve the desired performance and provide little to no intuition about how a given change will affect overall performance. In this thesis, I present EigenRemove and WeakExpand for the removal and addition of weights, providing a human-in-the-loop solution to the architecture search problem in both classical feedforward and convolutional neural network layers. EigenRemove yields results comparable to or better than the more popular Minimum Weight Selection pruning strategy, improving final test accuracy by 2-3% at larger compression ratios on the VGG16 network. WeakExpand is compared with a trivial Zero Weight Expansion approach, in which new connections are assigned zero weight. WeakExpand produces final test accuracies on VGG16 comparable to those of Zero Weight Expansion while providing new trainable weights rather than the dead weights produced by Zero Weight Expansion. Finally, I propose heuristics outlining how a user may apply WeakExpand and EigenRemove to achieve a desired effect based on the current state of their network's training.
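For context, the sketch below illustrates the two baseline operations the abstract compares against, not the thesis's own EigenRemove or WeakExpand methods: Minimum Weight Selection, which zeroes a layer's smallest-magnitude weights, and Zero Weight Expansion, which grows a layer with new connections assigned zero weight. This is a minimal NumPy sketch; the function names, the compression parameter, and the dense-layer formulation are illustrative assumptions rather than code from the thesis.

    import numpy as np

    def minimum_weight_selection(W, compression):
        """Zero out the smallest-magnitude fraction of weights in W.

        `compression` is the fraction of connections to remove
        (e.g. 0.5 prunes half of the layer's weights).
        """
        flat = np.abs(W).ravel()
        k = int(compression * flat.size)
        if k == 0:
            return W.copy()
        threshold = np.partition(flat, k - 1)[k - 1]
        pruned = W.copy()
        pruned[np.abs(W) <= threshold] = 0.0
        return pruned

    def zero_weight_expansion(W, new_units):
        """Append `new_units` output units whose incoming weights are zero.

        The added connections start at exactly zero, the "dead weight"
        behaviour the abstract contrasts WeakExpand against.
        """
        zeros = np.zeros((W.shape[0], new_units), dtype=W.dtype)
        return np.concatenate([W, zeros], axis=1)

    # Toy usage on a random dense layer with 4 inputs and 3 outputs.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 3))
    W_pruned = minimum_weight_selection(W, compression=0.5)
    W_grown = zero_weight_expansion(W, new_units=2)
    print(W_pruned)
    print(W_grown.shape)  # (4, 5)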