The ultimate goal of this research is to build a novel, generalized, arbitrary-depth neural controller that performs reward- and experience-based neuromodulatory learning and is online, bootstrapping, interactive, incremental, and dynamic. Autonomous agents, such as robots, may be able to adapt to uncertain environments if they use reward-based, interactive learning. Unfortunately, typical reward-based models assume discrete state and action spaces, whereas many interesting applications involve continuous spaces. This suggests the use of an artificial neural controller with continuous weights. Adapting the neuromodulatory features of biological brains to a robot controller plays an important role in building more biologically plausible robots; however, a biologically plausible learning model does not necessarily improve learning efficiency or optimize the neural network in a generalized way. For these reasons, this research introduces the Context-Aware Learning Model (CALM) and four learning algorithms that operate within this model, all of which use logistic regression backpropagation and hyperbolic, reward-based learning. This research introduces a novel way of combining reward- and experience-based learning with an arbitrary-depth artificial neural network and shows how specific behavioral neurobiological features can be applied to build a novel neuromodulatory learning mechanism. CALM is evaluated with five metrics on six synthetic data sets and shows promising performance.
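The following is a minimal sketch, not the thesis' CALM implementation, of how reward- and experience-based learning might be combined in an arbitrary-depth network: logistic (sigmoid) units trained by backpropagation, with each weight update scaled by a tanh-squashed ("hyperbolic") reward signal. The class name, update rule, and all parameters are illustrative assumptions rather than the author's algorithm.

```python
# Sketch: reward-modulated backpropagation in an arbitrary-depth sigmoid network.
# Assumed, illustrative design; not the CALM algorithms described in the abstract.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RewardModulatedMLP:
    def __init__(self, layer_sizes, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.lr = lr
        # One weight matrix per layer transition, with the bias folded into the last column.
        self.weights = [rng.normal(0.0, 0.5, (n_out, n_in + 1))
                        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(self, x):
        activations = [np.asarray(x, dtype=float)]
        for W in self.weights:
            a = np.append(activations[-1], 1.0)   # append constant bias input
            activations.append(sigmoid(W @ a))
        return activations

    def update(self, x, target, reward):
        # Experience-based part: standard backpropagation of squared error
        # through logistic (sigmoid) activations.
        acts = self.forward(x)
        delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
        # Reward-based part: scale every weight change by tanh(reward), so
        # weak or negative reward damps or reverses the experience-driven step.
        modulation = np.tanh(reward)
        for i in reversed(range(len(self.weights))):
            a = np.append(acts[i], 1.0)
            grad = np.outer(delta, a)
            self.weights[i] -= self.lr * modulation * grad
            if i > 0:
                back = self.weights[i][:, :-1].T @ delta
                delta = back * acts[i] * (1.0 - acts[i])
        return acts[-1]

# Usage sketch: one reward-modulated update on a 2-3-1 controller.
net = RewardModulatedMLP([2, 3, 1])
out = net.update(np.array([0.0, 1.0]), target=np.array([1.0]), reward=0.8)
```

Because the reward enters only as a multiplicative modulation of the gradient step, such a controller can in principle learn online and incrementally from a scalar feedback signal while still exploiting per-example error backpropagation, which is one plausible reading of the combination the abstract describes.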