Show simple item record

dc.contributor.advisor: Crick, Christopher
dc.contributor.author: Huynh, Hai
dc.date.accessioned: 2020-09-09T21:48:17Z
dc.date.available: 2020-09-09T21:48:17Z
dc.date.issued: 2020-05
dc.identifier.uri: https://hdl.handle.net/11244/325546
dc.description.abstract: Deep neural networks have been widely applied across many industries, including medicine, security, and self-driving cars. They even surpass human performance on image recognition tasks; however, they have a worrying property: neural networks are vulnerable to extremely small, human-imperceptible perturbations of images that lead them to produce wrong results with high confidence. Moreover, adversarial images that fool one model can often fool another, even one with a different architecture. Many studies have suggested that a reason for this transferability of adversarial samples is the similar features that different neural networks learn; however, this remains an assumption and a gap in our knowledge of adversarial attacks. Our research attempted to validate this assumption and provide better insight into the field of adversarial attacks. We hypothesize that if a neural network representation in one model is highly correlated with the neural network representations of other models, an attack on that representation will yield better transferability. We tested this hypothesis through experiments with different network architectures and datasets. The results were sometimes consistent and sometimes inconsistent with the hypothesis.
dc.format: application/pdf
dc.language: en_US
dc.rights: Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title: Investigate the effect of neural network representations on the transferability of adversarial attacks
dc.contributor.committeeMember: Mayfield, Blayne
dc.contributor.committeeMember: Thomas, Johnson
osu.filename: Huynh_okstate_0664M_16756.pdf
osu.accesstype: Open Access
dc.type.genre: Thesis
dc.type.material: Text
dc.subject.keywords: adversarial attack
dc.subject.keywords: black-box attack
dc.subject.keywords: transferability
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Oklahoma State University
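The transferability phenomenon described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the thesis's code: it uses toy synthetic data and two tiny logistic-regression models in place of deep networks, and crafts FGSM-style adversarial examples against model A to see how they affect model B; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, steps=500, seed=0):
    """Fit a tiny logistic-regression 'model' by gradient descent."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        g = p - y                          # gradient of binary cross-entropy
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return ((sigmoid(X @ w + b) > 0.5) == y).mean()

# Synthetic two-class data
n, d = 400, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

# Two models trained on the same data from different initializations
wA, bA = train_logreg(X, y, seed=1)
wB, bB = train_logreg(X, y, seed=2)

# FGSM against model A: x_adv = x + eps * sign(d loss / d x)
eps = 0.5
p = sigmoid(X @ wA + bA)
grad_x = np.outer(p - y, wA)               # input gradient of the loss
X_adv = X + eps * np.sign(grad_x)

print("clean accuracy, model A:", accuracy(wA, bA, X, y))
print("adversarial accuracy, model A:", accuracy(wA, bA, X_adv, y))
print("adversarial accuracy, model B (transfer):", accuracy(wB, bB, X_adv, y))
```

In this linear toy setting the two models learn very similar weight vectors, so the attack transfers almost perfectly; the thesis investigates whether an analogous correlation between internal representations of deep networks predicts transferability.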

