Show simple item record

dc.contributor.author: Thobbi, Anand Rajiv
dc.date.accessioned: 2014-04-17T20:09:19Z
dc.date.available: 2014-04-17T20:09:19Z
dc.date.issued: 2011-12-01
dc.identifier.uri: https://hdl.handle.net/11244/10283
dc.description.abstract: The aim of this work is to investigate the ways in which humans can use their bodily movements to communicate and interact with robots. The focus is on observing, characterizing, and predicting the human's arm and hand motion for human-robot communication (HRC). Such communication can be broadly classified as explicit or implicit. The robot learning from demonstrations (LfD) problem is studied as an example of explicit human-robot communication. Two cases are considered: (1) robot learning to perform simple arm gestures, for which a framework based on Dynamic Time Warping (DTW) is proposed and implemented, including the case where the demonstrations contain missing data segments; and (2) robot learning to perform tasks, for which the commonly used GMM/GMR framework is studied and implemented. Implicit human-robot communication is prevalent when humans and robots work collaboratively. Two examples of collaborative tasks are considered: (1) a joint table-lifting task, in which, for the task to be truly successful, the robot must decide its leader/follower role autonomously and dynamically; a framework in which the robot utilizes subtle cues from human motion to achieve this is proposed and evaluated. To enable proactive, leader-like behavior, the robot must learn the human motion model and generate next-state actions based on predictions generated from it. (2) A cooperative cooking task, in which, to proactively assist the human, the robot should be able to infer the human's intent; an approach is proposed in which this inference is based on long-term predictions of the human's hand motion, and the robot's actions are determined by the inferred intent. The theoretical ideas presented in this work are validated on an experimental platform consisting mainly of the Nao humanoid robot and the Vicon motion capture system. Qualitative and quantitative results demonstrate the efficacy of using human motion for both implicit and explicit human-robot communication.
dc.format: application/pdf
dc.language: en_US
dc.publisher: Oklahoma State University
dc.rights: Copyright is held by the author, who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction, or distribution of this material.
dc.title: Role of Human Motion in Human-robot Interaction
dc.type: text
osu.filename: Thobbi_okstate_0664M_11845.pdf
osu.college: Engineering, Architecture, and Technology
osu.accesstype: Open Access
dc.description.department: School of Electrical & Computer Engineering
dc.type.genre: Thesis
dc.subject.keywords: robotics
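The abstract describes a gesture-learning framework based on Dynamic Time Warping, which aligns two motion trajectories that differ in speed or timing. The thesis's actual framework is not reproduced here; the following is only a minimal sketch of the standard DTW distance between two 1-D trajectories (the function name and example sequences are illustrative, not from the source):

```python
def dtw_distance(a, b):
    """Standard DTW distance between two 1-D sequences (illustrative sketch)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance between samples
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] stretched
                                 cost[i][j - 1],      # b[j-1] stretched
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]

# A time-warped copy of a trajectory aligns at zero cost, even though
# pointwise (Euclidean) comparison of the unequal-length sequences fails.
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1]))  # → 0.0
```

Because the warping path may repeat samples of either sequence, demonstrations performed at different speeds map to the same gesture template, which is the property that makes DTW a natural fit for gesture recognition from human demonstrations.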

