HARNESSING DEEP REINFORCEMENT LEARNING: STUDIES IN ROBOTIC MANIPULATION, ENHANCED SEMANTIC SEGMENTATION, AND SECURING IMAGE CLASSIFIERS
dc.contributor.advisor | Cheng, Samuel | |
dc.contributor.author | Han, Dong | |
dc.contributor.committeeMember | Tang, Choon Yik | |
dc.contributor.committeeMember | MacDonald, Gregory | |
dc.contributor.committeeMember | Habibi, Golnaz | |
dc.date.accessioned | 2024-07-09T16:15:38Z | |
dc.date.available | 2024-07-09T16:15:38Z | |
dc.date.issued | 2024-08-16 | |
dc.date.manuscript | 2024-07 | |
dc.description.abstract | This dissertation investigates the potential of Deep Reinforcement Learning (DRL) in three domains: robotic manipulation, enhanced semantic segmentation, and the security of image classifiers. The research addresses challenges and limitations inherent in current DRL methodologies and offers novel insights and practical solutions.

In robotic manipulation, the study examines value-based, policy-based, and actor-critic DRL algorithms in depth. The findings highlight the strengths and limitations of each algorithm family, guiding the selection of appropriate methods for diverse robotic applications, and the research proposes new directions for integrating multiple learning paradigms to improve robotic adaptability and performance in complex environments.

For enhanced semantic segmentation, the dissertation develops a robust framework based on reinforced active learning. By integrating Dueling Deep Q-Networks (Dueling DQN), Prioritized Experience Replay, Noisy Networks, and Emphasizing Recent Experience, the framework handles imbalanced datasets and optimizes the annotation process. Experimental results demonstrate the framework's robustness and efficiency across domains, particularly under constrained annotation budgets.

In securing image classifiers, the research develops surrogate models that replicate proprietary image classification models under stringent constraints. An open-source framework integrating popular DQN extensions is introduced and shown to strengthen attack methodologies, and an evaluation of synthetic data generation techniques identifies best practices for training robust adversarial models, advancing the understanding of effective attack strategies in AI security.

The dissertation underscores the importance of improving sample efficiency, stability, generalization, and robustness in DRL algorithms, and it addresses ethical and practical considerations to minimize the risks associated with model extraction attacks. The practical implications extend to autonomous vehicles, robotics, and AI security, and the findings point to future work on multi-paradigm learning, broader evaluation frameworks, robust defense mechanisms, and advanced data generation techniques, reinforcing DRL's role in complex decision-making tasks across diverse fields. | en_US |
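Note: of the DQN extensions the abstract names, the dueling architecture is the most self-contained to illustrate. The sketch below is a minimal, hypothetical PyTorch-style dueling Q-network head, not the dissertation's actual code; the class name, layer sizes, and dimensions are illustrative assumptions. It shows the defining idea: the Q-function is split into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    # Minimal dueling DQN head (illustrative sketch; not the dissertation's code).
    import torch
    import torch.nn as nn

    class DuelingDQN(nn.Module):
        def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
            super().__init__()
            # Shared feature trunk (sizes are arbitrary assumptions).
            self.features = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)              # V(s): scalar state value
            self.advantage = nn.Linear(hidden, n_actions)  # A(s, a): per-action advantage

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.features(state)
            v = self.value(h)
            a = self.advantage(h)
            # Subtracting the mean advantage keeps V and A identifiable,
            # so the two streams cannot trade off arbitrary constants.
            return v + a - a.mean(dim=-1, keepdim=True)

    # Usage (shapes are illustrative):
    net = DuelingDQN(state_dim=8, n_actions=4)
    q_values = net(torch.randn(32, 8))  # -> tensor of shape (32, 4)

The other named extensions compose around such a head: Prioritized Experience Replay biases which transitions are sampled for training, Noisy Networks replace the linear layers with parameterized-noise variants for exploration, and Emphasizing Recent Experience skews sampling toward newer transitions.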
dc.identifier.uri | https://hdl.handle.net/11244/340467 | |
dc.language | en_US | en_US |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | * |
dc.subject | Reinforcement learning | en_US |
dc.subject | Model extraction attack | en_US |
dc.subject | Semantic segmentation | en_US |
dc.thesis.degree | Ph.D. | en_US |
dc.title | HARNESSING DEEP REINFORCEMENT LEARNING: STUDIES IN ROBOTIC MANIPULATION, ENHANCED SEMANTIC SEGMENTATION, AND SECURING IMAGE CLASSIFIERS | en_US |
ou.group | Gallogly College of Engineering::School of Electrical and Computer Engineering | en_US |
shareok.nativefileaccess | restricted | en_US |
shareok.orcid | 0009-0004-9530-7570 | en_US |