Date

2018-12-14

Abstract

Generative Adversarial Networks (GANs) are a subclass of deep generative models that aim to implicitly model a data distribution. While GANs have attracted wide research attention and achieved much success, when trained with first-order stochastic gradient descent (SGD) they suffer from training instabilities: non-convergence, in which they fail to converge to the Nash equilibrium of the minimax game, and mode collapse, in which they fail to learn all the modes of the data distribution and the generator's samples lack diversity. To this end, this thesis investigates the use of evolution strategies (ES) to train GANs and to address the mode collapse issue. The ES algorithm used in this work is a simplified version of natural evolution strategies (NES), which has achieved impressive results competitive with state-of-the-art SGD-based deep reinforcement learning (RL) algorithms. We also use a quality-diversity hybrid of ES, known as Novelty Seeking Reward Evolution Strategies (NSR-ES), which encourages exploration and diversity and is therefore of particular interest for the mode collapse problem. We propose two algorithms for training GANs, ES-GAN and NSR-ES-GAN, and carry out experiments on a constrained GAN setup in which mode collapse is well known, to study whether our algorithms can help overcome the issue. Our results show that training GANs with ES and NSR-ES fails to overcome mode collapse, and suggest that more robust, domain-specific techniques are needed to address the problem.
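
To make the training approach concrete: the simplified NES referenced above estimates a gradient of a fitness function by scoring Gaussian perturbations of the parameters and averaging the perturbation directions, weighted by their normalized fitness. The sketch below is a minimal illustration of one such update applied to a flat vector of generator parameters; the `fitness` callback (e.g., how well the perturbed generator's samples fool a fixed discriminator) and the hyperparameter values are hypothetical placeholders, not code from the thesis.

```python
import numpy as np

def es_step(theta, fitness, npop=50, sigma=0.1, alpha=0.01, rng=None):
    """One simplified-NES update: sample Gaussian perturbations of theta,
    score each perturbed parameter vector with `fitness`, and move theta
    along the fitness-weighted average of the noise directions."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = rng.standard_normal((npop, theta.size))           # perturbation directions
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    # Normalize scores to zero mean, unit variance for a stable update.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad_est = eps.T @ rewards / (npop * sigma)              # NES gradient estimate
    return theta + alpha * grad_est
```

In the NSR-ES variant, the score passed in as `fitness` would, roughly, blend this reward with a novelty term measured against an archive of previously seen generator behaviours, which is what is meant above by encouraging exploration and diversity.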

Keywords

Generative Adversarial Networks, Evolution Strategies, Novelty Search, Neuroevolution
