Evaluating GAM-Like Neural Network Architectures for Interpretable Machine Learning
Abstract
In many machine learning applications, interpretability is of the utmost importance. Artificial intelligence is proliferating, but before you entrust your finances, your well-being, or even your life to a machine, you’d really like to be sure that it knows what it’s doing.
As a human, the best way to evaluate an algorithm is to pick it apart, understand how it works, and figure out how it arrives at the decisions it does. Unfortunately, as machine learning techniques become more powerful and more complicated, reverse-engineering them is becoming more difficult. Engineers often choose to implement a model that is accurate rather than one that is understandable. In this work, we demonstrate a novel technique that, in certain circumstances, can be both.
This work introduces a novel neural network architecture that improves interpretability without sacrificing model accuracy. We test this architecture on a number of real-world classification datasets and demonstrate that it performs almost identically to state-of-the-art methods. We also introduce Pandemic, a new image classification benchmark, to demonstrate that our architecture has further applications in deep-learning models.
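The abstract does not spell out the architecture itself, but the "GAM-like" design named in the title generally refers to an additive structure in which each input feature is passed through its own small subnetwork and the per-feature outputs are summed, so each feature's learned contribution can be plotted and inspected on its own. The following is a minimal sketch of that general idea only, written with PyTorch; the class name, layer sizes, and training setup are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn


class GAMLikeNet(nn.Module):
    """Illustrative GAM-style network: one small subnetwork per feature, outputs summed.

    This is a generic sketch of the additive idea, not the architecture from the thesis.
    """

    def __init__(self, num_features: int, hidden_dim: int = 16):
        super().__init__()
        # Each feature x_i gets its own shape function f_i, which keeps the model
        # interpretable: the effect of each feature can be visualized separately.
        self.feature_nets = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(1, hidden_dim),
                    nn.ReLU(),
                    nn.Linear(hidden_dim, 1),
                )
                for _ in range(num_features)
            ]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, num_features); feed each column to its own subnetwork.
        contributions = [
            net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)
        ]
        # Additive structure: logit = bias + sum_i f_i(x_i).
        return self.bias + torch.stack(contributions, dim=-1).sum(dim=-1)


# Example usage: binary classification with 5 input features.
model = GAMLikeNet(num_features=5)
logits = model(torch.randn(8, 5))   # shape (8, 1)
probs = torch.sigmoid(logits)       # per-example class probabilities
```

Because the prediction is a sum of univariate functions, plotting each subnetwork's output over its feature's range recovers GAM-style shape plots, which is the usual source of the interpretability claim for this family of models.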