Show simple item record

dc.contributor.advisor: Commuri, Sesh
dc.contributor.author: Soni, Ravi
dc.date.accessioned: 2017-08-07T20:44:03Z
dc.date.available: 2017-08-07T20:44:03Z
dc.date.issued: 2017
dc.identifier.uri: https://hdl.handle.net/11244/51898
dc.description.abstract: Over the past several decades, robots have been used extensively in environments that pose high risk to human operators and in jobs that are repetitive and monotonous. In recent years, robot autonomy has been exploited to extend their use in several non-trivial tasks such as space exploration, underwater exploration, and investigating hazardous environments. Such tasks require robots to function in unstructured environments that can change dynamically. Successful use of robots in these tasks requires them to be able to determine their precise location, obtain maps and other information about their environment, navigate autonomously, and operate intelligently in the unknown environment. The process of determining the location of the robot and generating a map of its environment has been termed in the literature as Simultaneous Localization and Mapping (SLAM). Light Detection and Ranging (LiDAR), Sound Navigation and Ranging (SONAR) sensors, and depth cameras are typically used to generate a representation of the environment during the SLAM process. However, real-time localization and generation of map information are still challenging tasks. Therefore, there is a need for techniques that speed up the approximate localization and mapping process while using fewer computational resources. This thesis presents an alternative method based on deep learning and computer vision algorithms for generating approximate localization information for mobile robots. This approach has been investigated to obtain approximate localization information from images captured by monocular cameras. Approximate localization can subsequently be used to develop coarse maps where a priori information is not available. Experiments were conducted to verify the ability of the proposed technique to determine the approximate location of the robot.
The approximate location of the robot was qualitatively denoted in terms of its location in a building, a floor of the building, and interior corridors. ArUco markers were used to determine the quantitative location of the robot. The use of this approximate location of the robot in determining the location of key features in its vicinity was also studied. The results of the research reported in this thesis demonstrate that low-cost, low-resolution techniques can be used in conjunction with deep learning techniques to obtain approximate localization of an autonomous robot. Further, such approximate information can be used to determine coarse position information of key features in the vicinity. It is anticipated that this approach can be subsequently extended to develop low-resolution maps of the environment that are suitable for autonomous navigation of robots. [en_US]
dc.language: en_US [en_US]
dc.subject: Engineering, Electronics and Electrical. [en_US]
dc.subject: Engineering, Robotics. [en_US]
dc.subject: Computer Science. [en_US]
dc.title: Indoor Localization and Mapping Using Deep Learning Networks [en_US]
dc.contributor.committeeMember: Fagg, Andrew
dc.contributor.committeeMember: Tang, Choon Yik
dc.date.manuscript: 2017-08-01
dc.thesis.degree: Master of Science [en_US]
ou.group: College of Engineering::School of Electrical and Computer Engineering [en_US]
shareok.nativefileaccess: restricted [en_US]


