
dc.contributor.advisor: Hagan, Martin T.
dc.contributor.author: Williams, Evan Ruck
dc.date.accessioned: 2017-02-22T22:13:17Z
dc.date.available: 2017-02-22T22:13:17Z
dc.date.issued: 2015-05-01
dc.identifier.uri: https://hdl.handle.net/11244/49023
dc.description.abstract: Hardware implementations of Artificial Neural Network (ANN) architectures can take advantage of parallelism in the ANN algorithm. Using automated procedures, arbitrary amounts of Field Programmable Gate Array (FPGA) resources can be allocated to calculate arbitrary ANN algorithms. This document analyzes trade-offs between the speed and area required as an arbitrary ANN algorithm is computed over arbitrary hardware sizes. Comparing the calculated number of cycles and area as the number of inputs to a neuron and the number of neurons in a layer vary, it is found that there exists an optimal number of inputs and neurons for a given ANN algorithm.
dc.format: application/pdf
dc.language: en_US
dc.rights: Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title: Creating Scalable Neural Networks with Maximal FPGA Resources
dc.contributor.committeeMember: Latino, Carl D
dc.contributor.committeeMember: Stine, James E
osu.filename: Williams_okstate_0664M_14037.pdf
osu.accesstype: Open Access
dc.description.department: Electrical Engineering
dc.type.genre: Thesis
dc.type.material: text

