Show simple item record

dc.contributor.advisor  Heisterkamp, Douglas
dc.contributor.author  Yang, Zhuxi
dc.date.accessioned  2021-05-25T20:32:21Z
dc.date.available  2021-05-25T20:32:21Z
dc.date.issued  2020-12
dc.identifier.uri  https://hdl.handle.net/11244/329943
dc.description.abstract  Although Generative Adversarial Networks (GANs) have achieved much success in various unsupervised learning tasks, their training is unstable. Another limitation of GANs, and of deep neural networks in general, is their lack of interpretability. To help address these gaps, we aim to improve the training stability of GANs and the interpretability of deep learning models. To improve the stability of GANs, we propose Stable Neighbor Match (SNM) training. SNM searches for a stable match between generated and real samples and then approximates a Wasserstein distance based on that stable match. Our experimental results show that SNM is a stable and effective training method for unsupervised learning. To develop more explainable neural components, we propose an interpretable architecture called the Choice Cell (CC). An advantage of the CC is that its hidden representation admits an intuitive interpretation as a probability distribution. We then combine the CC with other subgenerators to build the Choice Generator (CG). Experimental results indicate that the CG is not only more explainable but also achieves performance comparable to other popular generators. In addition, to help the subgenerators of the CG learn more homogeneous representations, we apply within- and between-subgenerator regularization to the training of the CG. We find that this regularization improves the performance of the CG in learning from imbalanced data. Finally, we extend the CC to an interpretable conditional model called the Conditional Choice Cell (CCC). The results indicate the potential of the CCC as an effective conditional model with the advantage of being more explainable.
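The SNM idea summarized in the abstract, pairing each generated sample with a real sample via a stable matching and using the matched distances to approximate a Wasserstein distance, can be sketched as follows. This is an illustrative reconstruction, not the dissertation's actual algorithm: it assumes Gale–Shapley stable matching with both sides ranking partners by Euclidean distance, and equal-sized sample batches.

```python
import numpy as np

def stable_neighbor_match(gen, real):
    """Stable match generated samples to real samples, then return the
    mean matched distance as a rough Wasserstein-distance estimate.
    Both inputs are (n, dim) arrays of equal length n."""
    d = np.linalg.norm(gen[:, None, :] - real[None, :, :], axis=-1)
    n = len(gen)
    prefs = np.argsort(d, axis=1)            # each gen sample ranks real samples by distance
    next_choice = np.zeros(n, dtype=int)     # next real sample each gen sample will propose to
    match_of_real = -np.ones(n, dtype=int)   # gen index matched to each real sample, -1 if free
    free = list(range(n))                    # generated samples not yet matched
    while free:                              # Gale-Shapley deferred acceptance
        g = free.pop()
        r = prefs[g, next_choice[g]]
        next_choice[g] += 1
        cur = match_of_real[r]
        if cur == -1:
            match_of_real[r] = g             # real sample r was free: accept
        elif d[g, r] < d[cur, r]:
            match_of_real[r] = g             # r prefers the closer proposer g
            free.append(cur)                 # its previous partner becomes free again
        else:
            free.append(g)                   # r rejects g; g proposes elsewhere later
    # Approximate Wasserstein distance: mean distance over the stable pairs.
    return float(np.mean([d[match_of_real[r], r] for r in range(n)]))
```

Because the resulting matching is stable (no generated/real pair would both prefer each other over their assigned partners), the matched distances give a plausible stand-in for an optimal-transport cost without solving the full assignment problem.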
dc.format  application/pdf
dc.language  en_US
dc.rights  Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title  Choice cell architecture and stable neighbor match training to increase interpretability and stability of deep generative modeling
dc.contributor.committeeMember  Mayfield, Blayne
dc.contributor.committeeMember  Crick, Christopher
dc.contributor.committeeMember  Liang, Ye
osu.filename  Yang_okstate_0664D_16927.pdf
osu.accesstype  Open Access
dc.type.genre  Dissertation
dc.type.material  Text
thesis.degree.discipline  Computer Science
thesis.degree.grantor  Oklahoma State University

