Creating Scalable Neural Networks with Maximal FPGA Resources
Abstract
Hardware implementations of Artificial Neural Network (ANN) architectures can take advantage of parallelism in the ANN algorithm. Using automated procedures, arbitrary amounts of Field Programmable Gate Array (FPGA) resources can be allocated to compute arbitrary ANN algorithms. This document analyzes trade-offs between the speed and area required as an arbitrary ANN algorithm is computed over arbitrary hardware sizes. Comparing the calculated number of cycles and area as the number of inputs to a neuron and the number of neurons in a layer vary, it is found that there exists an optimal number of inputs and neurons for a given ANN algorithm.
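The speed/area trade-off described above can be illustrated with a toy model. This sketch is not from the thesis: it assumes a layer's work is a pool of multiply-accumulate (MAC) operations spread evenly over `mac_units` parallel units, that each unit completes one MAC per cycle, and that area grows linearly with the unit count. All function names and the area-delay metric are illustrative assumptions.

```python
import math

def layer_cycles(num_neurons: int, num_inputs: int, mac_units: int) -> int:
    """Cycles to compute one fully connected layer, assuming each MAC unit
    finishes one multiply-accumulate per cycle and the work divides evenly.
    (Illustrative model only, not the thesis's actual cost function.)"""
    total_macs = num_neurons * num_inputs
    return math.ceil(total_macs / mac_units)

def sweep(num_neurons: int, num_inputs: int, max_units: int):
    """Return (units, cycles, area-delay) triples for 1..max_units MAC units.
    Area is modeled as proportional to the unit count, so cycles * units
    serves as a crude area-delay product for comparing design points."""
    rows = []
    for units in range(1, max_units + 1):
        cycles = layer_cycles(num_neurons, num_inputs, units)
        rows.append((units, cycles, units * cycles))
    return rows

if __name__ == "__main__":
    # Sweep hardware sizes for a hypothetical layer of 8 neurons, 16 inputs each.
    for units, cycles, adp in sweep(num_neurons=8, num_inputs=16, max_units=8):
        print(f"MAC units={units:2d}  cycles={cycles:4d}  area-delay={adp}")
```

Under these assumptions, adding MAC units cuts the cycle count but raises area; ceiling effects mean some unit counts waste hardware, which is one way an "optimal" number of inputs and neurons for a given resource budget can arise.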
Collections
- OSU Theses