
dc.contributor.advisor: Kamalapurkar, Rushikesh
dc.contributor.author: Mahmud, S. M. Nahid
dc.date.accessioned: 2021-09-24T13:58:06Z
dc.date.available: 2021-09-24T13:58:06Z
dc.date.issued: 2021-05
dc.identifier.uri: https://hdl.handle.net/11244/330931
dc.description.abstract: The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially where task restarts are not available and/or when the systems are safety-critical. Safety requirements are often expressed in terms of state and/or control constraints. Methods such as barrier transformation and control barrier functions have been successfully used for safe learning in systems under state constraints and/or control constraints, in conjunction with model-based reinforcement learning to learn the optimal control policy. However, existing barrier-based safe learning methods rely on fully known models and full state feedback. In this thesis, two different safe model-based reinforcement learning techniques are developed. One of the techniques utilizes a novel filtered concurrent learning method to realize simultaneous learning and control in the presence of model uncertainties for safety-critical systems, and the other technique utilizes a novel dynamic state estimator to realize simultaneous learning and control for safety-critical systems with a partially observable state. The applicability of the developed techniques is demonstrated through simulations, and to illustrate their effectiveness, comparative simulations are presented wherever alternate methods exist to solve the problem under consideration. The thesis concludes with a discussion about the limitations of the developed techniques. Extensions of the developed techniques are also proposed along with the possible approaches to achieve them.
dc.format: application/pdf
dc.language: en_US
dc.rights: Copyright is held by the author, who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction, or distribution of this material.
dc.title: Safety-aware model-based reinforcement learning using barrier transformation
dc.contributor.committeeMember: Yen, Gary
dc.contributor.committeeMember: Bai, He
osu.filename: Mahmud_okstate_0664M_17192.pdf
osu.accesstype: Open Access
dc.type.genre: Thesis
dc.type.material: Text
dc.subject.keywords: barrier transformation
dc.subject.keywords: model-based reinforcement learning
dc.subject.keywords: optimal control
dc.subject.keywords: safety-critical systems
thesis.degree.discipline: Mechanical and Aerospace Engineering
thesis.degree.grantor: Oklahoma State University