Show simple item record

dc.contributor.advisor	McGovern, Amy
dc.contributor.author	Jabr, Khaled
dc.date.accessioned	2018-12-14T22:22:38Z
dc.date.available	2018-12-14T22:22:38Z
dc.date.issued	2018-12-14
dc.identifier.uri	https://hdl.handle.net/11244/316799
dc.description.abstract	Generative Adversarial Networks (GANs) are a subclass of deep generative models that aim to implicitly model a data distribution. While GANs have gained wide research attention and achieved much success, when trained with first-order stochastic gradient descent (SGD) they suffer from training instabilities such as non-convergence, in which they fail to reach the Nash equilibrium of the minimax game, and mode collapse, in which they fail to learn all the modes of the data distribution and the generator's samples lack diversity. To address the mode collapse issue, this thesis investigates the use of evolution strategies (ES) to train GANs. The ES algorithm used in this work is a simplified version of natural evolution strategies (NES) that has achieved impressive, competitive results against state-of-the-art SGD-based deep reinforcement learning (RL) algorithms. We also use a quality-diversity hybrid of ES, known as Novelty Seeking Reward Evolution Strategies (NSR-ES), which encourages exploration and diversity and is therefore particularly interesting in relation to the mode collapse problem. We propose two algorithms to train GANs, ES-GAN and NSR-ES-GAN, and carry out experiments on a constrained GAN setup where mode collapse is well documented, to study whether our algorithms can help overcome the issue. Our results show that training GANs with ES and NSR-ES fails to overcome the mode collapse issue, and suggest that more robust, domain-specific techniques are needed to overcome the problem.	en_US
dc.language	en	en_US
dc.subject	Generative Adversarial Networks	en_US
dc.subject	Evolution Strategies	en_US
dc.subject	Novelty Search	en_US
dc.subject	Neuroevolution	en_US
dc.title	Using Novelty Seeking Reward Evolution Strategies to Train Generative Adversarial Networks	en_US
dc.contributor.committeeMember	Hougen, Dean
dc.contributor.committeeMember	Fagg, Andrew
dc.date.manuscript	2018-12-14
dc.thesis.degree	Master of Science	en_US
ou.group	Gallogly College of Engineering::School of Computer Science	en_US
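
Note: the abstract above describes training the generator with a simplified NES-style ES update, plus an NSR-ES variant that blends fitness with a novelty score. The following is a minimal sketch of that kind of ES update, assuming NumPy and a hypothetical stand-in fitness function in place of a real discriminator score; it is illustrative only, not the thesis implementation.

import numpy as np

# One NES-style ES step: perturb the parameter vector, score each
# perturbation, and move along the reward-weighted average of the
# noise directions (the simplified NES estimator the abstract refers to).
def es_step(theta, fitness, npop=50, sigma=0.1, alpha=0.03, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((npop, theta.size))   # perturbation directions
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = eps.T @ rewards / (npop * sigma)         # NES gradient estimate
    return theta + alpha * grad

# Hypothetical stand-in fitness: in an ES-GAN-style setup this would be
# the discriminator's score on samples from the perturbed generator; an
# NSR-ES-style variant would blend it with a novelty score, e.g.
# 0.5 * fitness + 0.5 * novelty.
target = np.array([1.0, -2.0, 0.5])
fitness = lambda th: -np.sum((th - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(3)
for _ in range(500):
    theta = es_step(theta, fitness, rng=rng)
print(theta)  # approximately approaches `target`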

