Nonuniqueness and equivalence in online inverse reinforcement learning with applications to pilot performance modeling
Abstract
The focus of this thesis is behavior modeling for pilots of unmanned aerial vehicles. The pilot is assumed to make decisions that optimize an unknown cost functional, which is estimated from observed trajectories using a novel inverse reinforcement learning (IRL) framework. The resulting IRL problem often admits multiple solutions. This nonuniqueness necessitates the study of equivalent solutions, i.e., solutions that yield different cost functionals but the same feedback matrix, and of convergence to such solutions. While offline algorithms that converge to equivalent solutions have been developed in the literature, online, real-time techniques that address nonuniqueness are not available. In this thesis, a regularized history stack observer that converges to approximately equivalent solutions of the IRL problem is developed. Novel data-richness conditions are developed to facilitate the analysis, and simulation results demonstrate the effectiveness of the technique. The IRL observer is then adapted to the pilot modeling problem and shown to converge to one of the equivalent solutions of the IRL problem. The technique is implemented on a quadcopter whose pilot is modeled as a linear quadratic regulator. Experimental results demonstrate the robustness of the method and its ability to learn an equivalent cost functional.
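The equivalence notion at the heart of the abstract can be illustrated with a minimal sketch (not taken from the thesis): in the linear quadratic regulator setting, scaling the cost matrices (Q, R) by any positive constant changes the cost functional but leaves the optimal feedback gain unchanged, so an IRL algorithm can at best recover the cost up to such an equivalence class. The double-integrator dynamics below are a hypothetical example chosen for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and
    return the LQR feedback gain K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical double-integrator dynamics (illustrative only).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.eye(2)
R = np.array([[1.0]])

K1 = lqr_gain(A, B, Q, R)

# Scaling the entire cost functional by c > 0 gives a different
# (Q, R) pair but the same feedback matrix: the Riccati solution
# scales to cP, and the factor cancels in K = (cR)^{-1} B^T (cP).
c = 5.0
K2 = lqr_gain(A, B, c * Q, c * R)

print(np.allclose(K1, K2))  # the two cost functionals are equivalent
```

This is why the observer in the thesis is analyzed for convergence to an *equivalent* cost functional rather than to the pilot's true one: distinct cost matrices that induce the same feedback behavior are indistinguishable from trajectory data alone.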
Collections
- OSU Theses [15752]