dc.contributor.advisor | Hagan, Martin T. | |
dc.contributor.author | Williams, Evan Ruck | |
dc.date.accessioned | 2017-02-22T22:13:17Z | |
dc.date.available | 2017-02-22T22:13:17Z | |
dc.date.issued | 2015-05-01 | |
dc.identifier.uri | https://hdl.handle.net/11244/49023 | |
dc.description.abstract | Hardware implementations of Artificial Neural Network (ANN) architectures can take advantage of parallelism in the ANN algorithm. Using automated procedures, arbitrary amounts of Field Programmable Gate Array (FPGA) resources can be allocated to calculate arbitrary ANN algorithms. This document analyzes trade-offs between the speed and area required as an arbitrary ANN algorithm is computed over arbitrary hardware sizes. Comparing the calculated number of cycles and area as the number of inputs to a neuron and the number of neurons in a layer vary, it is found that there exists an optimal number of inputs and neurons for a given ANN algorithm. | |
dc.format | application/pdf | |
dc.language | en_US | |
dc.rights | Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material. | |
dc.title | Creating Scalable Neural Networks with Maximal FPGA Resources | |
dc.contributor.committeeMember | Latino, Carl D | |
dc.contributor.committeeMember | Stine, James E | |
osu.filename | Williams_okstate_0664M_14037.pdf | |
osu.accesstype | Open Access | |
dc.description.department | Electrical Engineering | |
dc.type.genre | Thesis | |
dc.type.material | text | |