
dc.contributor.advisor: Hougen, Dean
dc.contributor.author: Morgan, Brandon
dc.date.accessioned: 2022-07-29T16:14:05Z
dc.date.available: 2022-07-29T16:14:05Z
dc.date.issued: 2022-08
dc.identifier.uri: https://hdl.handle.net/11244/336288
dc.description.abstract: NASNet and AmoebaNet are state-of-the-art neural architecture search systems that achieved better accuracy than state-of-the-art human-designed convolutional neural networks. Despite the innovation of the NASNet search space, it lacks the flexibility to optimize non-convolutional operation layers, such as batch normalization, activation, and dropout. These layers are hand-designed by the architect prior to optimization, limiting the exploration of possible model architectures by narrowing the search space. In addition, the NASNet search space does not allow many non-classical optimization techniques to be applied, as it cannot be expressed as a fixed-length, floating-point, multidimensional array. Lastly, both NASNet and AmoebaNet require an extensive amount of computation, each evaluating 20,000 models during optimization and consuming 2,000 GPU hours of computation. This work addresses these limitations by, first, changing the NASNet search space to include optimization of non-convolutional operation layers through the addition of a building block that allows the order and inclusion of these layers to be optimized; second, proposing a fixed-length, floating-point, multidimensional array representation that allows other non-classical optimization techniques, such as particle swarm optimization, to be applied; and third, proposing an efficient genetic algorithm that uses state-of-the-art techniques to reduce training complexity. After evaluating only 1,300 models, consuming 190 GPU hours, while evolving on the CIFAR-10 benchmark dataset, the best model configuration yielded a test accuracy of 94.6% with only 1.3 million parameters, and a test accuracy of 95.09% with only 5.17 million parameters, outperforming both ResNet110 and WideResNet. When transferred to the CIFAR-100 benchmark dataset, the best model configuration yielded a test accuracy of 71.1% with only 1.3 million parameters, and a test accuracy of 76.53% with only 5.17 million parameters. [en_US]
dc.language: en [en_US]
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: Neural Architecture Search [en_US]
dc.subject: Genetic Algorithms [en_US]
dc.subject: Particle Swarm Optimization [en_US]
dc.title: Efficient Neural Architecture Search using Genetic Algorithm [en_US]
dc.contributor.committeeMember: Pan, Chongle
dc.contributor.committeeMember: Diochnos, Dimitrios
dc.date.manuscript: 2022-07
dc.thesis.degree: Master of Science [en_US]
ou.group: Gallogly College of Engineering::School of Computer Science [en_US]
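
The abstract above describes a fixed-length, floating-point array encoding in which the inclusion and order of non-convolutional layers (batch normalization, activation, dropout) are themselves searchable. The thesis's exact encoding is not reproduced in this record; the sketch below is a hypothetical illustration only, and the gene layout, thresholds, and all names (e.g. decode_block) are assumptions made for the example.

# Minimal sketch (assumed encoding, not the thesis's): decode a slice of a
# fixed-length, floating-point gene vector into one convolutional building
# block whose non-convolutional layers have searchable inclusion and order.
import torch
import torch.nn as nn


def decode_block(genes, in_channels, out_channels):
    """genes: floats in [0, 1].
    genes[0:3] -> include batch norm / activation / dropout (threshold 0.5)
    genes[3:6] -> priority values that order the included layers
    genes[6]   -> dropout rate, scaled to [0, 0.5]
    """
    layers = [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)]
    candidates = []
    if genes[0] > 0.5:
        candidates.append((genes[3], nn.BatchNorm2d(out_channels)))
    if genes[1] > 0.5:
        candidates.append((genes[4], nn.ReLU()))
    if genes[2] > 0.5:
        candidates.append((genes[5], nn.Dropout2d(p=0.5 * genes[6])))
    # Sort the included layers by their priority gene to fix the layer order.
    layers.extend(layer for _, layer in sorted(candidates, key=lambda t: t[0]))
    return nn.Sequential(*layers)


# Example: one candidate block decoded from a 7-gene slice of the encoding.
block = decode_block([0.9, 0.8, 0.2, 0.7, 0.3, 0.0, 0.4],
                     in_channels=16, out_channels=32)
x = torch.randn(1, 16, 32, 32)
print(block)
print(block(x).shape)  # torch.Size([1, 32, 32, 32])

Because every block is a fixed-size slice of real numbers, a genetic algorithm or particle swarm optimizer can operate directly on the concatenated vector, which is the property the abstract highlights.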



