
Date

2022-08


Machine learning (ML) is increasingly sought after in diverse domains. Unfortunately, most ML research has focused on improving performance on evaluation metrics such as accuracy, to the exclusion of other qualities like interpretability. However, to support important decisions, ML models need to be interpretable. The goal of interpretable machine learning (IML) is to build models that are understandable to users; one approach is to build models in which each component has meaning. IML thus helps build models that are trustworthy and that improve fairness in artificial intelligence. In informed ML, prior knowledge is explicitly integrated into the ML pipeline and training process. Interactive ML enables ML models to be interactively steered by people and is especially advantageous for tasks where human knowledge is needed in the analysis process. In this work, we propose the I3 framework, which brings together the ideas of being informed, interactive, and interpretable, and we reintroduce, highlight, and situate one existing approach within that larger picture. Pei et al.'s work is a strong candidate and one instantiation of the I3 framework: it approximates the kinematics of a robotic arm using interpretable artificial neural networks (ANNs). Developed using applied mathematics for engineering mechanics, it approximates nonlinear functions by using domain knowledge and visually observable features of the data to design the ANNs. The work is informed because scientific knowledge (through applied mathematics and engineering) and world knowledge (through vision) are represented in the form of algebraic equations, logic rules, and human feedback. This represented knowledge helps narrow down the hypotheses for the network architecture.
Pei et al.'s work integrates prior knowledge obtained by examining the dominant features of the data. The interactive process then consists of choosing an appropriate basis function from a visualization of the function to be approximated; this choice guides the design of the ANN architecture and its initial values. Interpretability is the result of being informed and interactive. After analyzing Pei et al.'s work, we present a feasibility study approximating the kinematics of a simplified robotic arm. We extend their approach to a different application, noting the challenges that arise when scaling to more inputs and to multiple hidden layers. The approach leads to training success, good generalization, and interpretability.
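The informed design described in the abstract can be illustrated with a minimal sketch for the end-effector x-coordinate of a 2-link planar arm. The link lengths, variable names, and the use of a cosine hidden layer here are illustrative assumptions for exposition, not details taken from the thesis or from Pei et al.'s work; the point is only that domain knowledge (the kinematic equations) can fix the architecture and weights so that every network component has a physical meaning.

```python
import numpy as np

# Hypothetical 2-link planar arm; link lengths are illustrative, not from the thesis.
L1, L2 = 1.0, 0.5

def forward_kinematics_x(theta1, theta2):
    """End-effector x-coordinate of a 2-link planar arm (the function to approximate)."""
    return L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)

# Informed network: one hidden layer with a cosine basis (activation) chosen from
# the visually dominant feature of the data (oscillatory behavior in the angles).
# Domain knowledge fixes the input weights so each hidden unit computes one
# physically meaningful angle term, making the network interpretable by design.
W_in = np.array([[1.0, 0.0],    # hidden unit 1 sees theta1            -> cos(theta1)
                 [1.0, 1.0]])   # hidden unit 2 sees theta1 + theta2   -> cos(theta1 + theta2)
W_out = np.array([L1, L2])      # output weights are exactly the link lengths

def informed_net(theta1, theta2):
    h = np.cos(W_in @ np.array([theta1, theta2]))  # interpretable hidden units
    return W_out @ h

# With knowledge-derived weights, the informed architecture reproduces the
# kinematics to machine precision across random joint angles.
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(100, 2))
err = max(abs(informed_net(t1, t2) - forward_kinematics_x(t1, t2))
          for t1, t2 in angles)
print(f"max |error| = {err:.2e}")
```

In a realistic setting the weights would be initialized from such knowledge and then trained, rather than fixed exactly; the sketch shows the limiting case where the informed architecture already contains the target function.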

Keywords

Interpretable Machine Learning, Informed Machine Learning, Explainable Artificial Intelligence, Interactive Machine Learning, Kinematics of Robot arms
