Evaluating GAM-Like Neural Network Architectures for Interpretable Machine Learning
dc.contributor.advisor | McGovern, Amy | |
dc.contributor.author | Booker, William | |
dc.contributor.committeeMember | Hougen, Dean | |
dc.contributor.committeeMember | Grant, Christan | |
dc.date.accessioned | 2019-05-10T18:30:40Z | |
dc.date.available | 2019-05-10T18:30:40Z | |
dc.date.issued | 2019-05 | |
dc.date.manuscript | 2019-05 | |
dc.description.abstract | In many machine learning applications, interpretability is of the utmost importance. Artificial intelligence is proliferating, but before you entrust your finances, your well-being, or even your life to a machine, you would like to be sure that it knows what it is doing. For a human, the best way to evaluate an algorithm is to pick it apart, understand how it works, and figure out how it arrives at its decisions. Unfortunately, as machine learning techniques become more powerful and more complicated, reverse-engineering them is becoming more difficult, and engineers often choose to implement a model that is accurate rather than one that is understandable. In this work, we demonstrate a novel technique that, in certain circumstances, can be both: a neural network architecture that improves interpretability without sacrificing model accuracy. We test this architecture on a number of real-world classification datasets and demonstrate that it performs almost identically to state-of-the-art methods. We also introduce Pandemic, a novel image classification benchmark, to demonstrate that our architecture has further applications in deep-learning models. | en_US |
dc.identifier.uri | https://hdl.handle.net/11244/319701 | |
dc.language | en_US | en_US |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject | GAM | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Interpretability | en_US |
dc.subject | Neural Networks | en_US |
dc.thesis.degree | Master of Science | en_US |
dc.title | Evaluating GAM-Like Neural Network Architectures for Interpretable Machine Learning | en_US |
ou.group | Gallogly College of Engineering::School of Computer Science | en_US |
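The abstract above refers to a GAM-like neural network architecture. For background only, the sketch below shows one common way such an additive model can be built in PyTorch: each input feature is processed by its own small subnetwork and the per-feature outputs are summed, so each feature's learned contribution can be plotted and inspected in isolation, much like the shape functions of a generalized additive model. The class name AdditiveNet, the layer sizes, and the binary-classification setup are illustrative assumptions, not the specific architecture evaluated in this thesis.

# Minimal sketch of a generic GAM-style additive neural network.
# Assumptions: PyTorch, one subnetwork per feature, outputs summed into a
# single logit. This illustrates the general "GAM-like" idea only; it is not
# the architecture proposed in the thesis.
import torch
import torch.nn as nn


class AdditiveNet(nn.Module):
    def __init__(self, num_features: int, hidden_units: int = 16):
        super().__init__()
        # One independent subnetwork (shape function) per input feature.
        self.feature_nets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(1, hidden_units),
                nn.ReLU(),
                nn.Linear(hidden_units, 1),
            )
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, num_features); each column feeds its own subnetwork.
        contributions = [
            net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)
        ]
        # Summing per-feature contributions keeps the model additive, which is
        # what makes each feature's effect individually interpretable.
        logits = torch.cat(contributions, dim=1).sum(dim=1, keepdim=True) + self.bias
        return logits  # pair with a sigmoid / BCEWithLogitsLoss for classification


# Example usage: binary classification on 5 hypothetical features.
model = AdditiveNet(num_features=5)
batch = torch.randn(8, 5)
print(model(batch).shape)  # torch.Size([8, 1])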
Files
License bundle (1 file)
- Name: license.txt
- Size: 1.71 KB
- Format: Item-specific license agreed upon to submission