Indoor Scene Reconstruction Using the Manhattan Assumption
Abstract
Robots are being developed as co-inhabitants to help elderly people in assisted environments. A semantic map can provide robots with rich information about the environment they share with people. So far, most mapping algorithms have been limited to building maps from visible points, with little consideration of occluded regions. This research is two-fold. First, it aims to develop a complete map that helps robots gain a deeper understanding of the house. The second goal is to reconstruct scenes by mimicking people's indoor understanding. Based on the Manhattan assumption, we propose a technique that separates an indoor scene into major structures and indoor objects. The room structure is reconstructed with ideal planes that render each side of the room, and the unseen regions of both major structures and objects are generated by extending visible planes. Our system is applied to an artificial kitchen scene and a typical living-room scene. The results show that the generated maps are more complete and semantically meaningful than those created by traditional data-driven approaches. Our algorithm has great potential to improve a robot's efficiency by helping it accurately localize itself in a cluttered scene and find useful objects.
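The core of the Manhattan assumption is that major room structures (walls, floor, ceiling) align with three mutually orthogonal axes. A minimal sketch of how estimated plane normals might be separated into structure and object candidates is shown below; the function name, the tolerance value, and the pure-Python form are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): snap an estimated plane normal
# to the nearest Manhattan axis; planes within a tolerance of an axis are
# treated as candidate room structures, the rest as indoor objects.
import math

AXES = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def snap_to_manhattan(normal, tol_deg=15.0):
    """Return (axis_index, snapped_normal) if `normal` lies within
    tol_deg of a Manhattan axis, else None (an object-plane candidate)."""
    n = _normalize(normal)
    # Pick the axis with the largest absolute dot product (smallest angle).
    best_i, best_dot = max(
        ((i, abs(sum(a * b for a, b in zip(n, ax)))) for i, ax in enumerate(AXES)),
        key=lambda t: t[1],
    )
    angle = math.degrees(math.acos(min(best_dot, 1.0)))
    if angle <= tol_deg:
        # Keep the sign of the original normal on the snapped axis.
        sign = 1.0 if sum(a * b for a, b in zip(n, AXES[best_i])) >= 0 else -1.0
        return best_i, tuple(sign * c for c in AXES[best_i])
    return None
```

For example, a slightly tilted wall normal such as (0.99, 0.05, 0.0) snaps to the x-axis, while a 45°-slanted plane returns None and would be grouped with indoor objects rather than extended as a room face.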
Collections
- OSU Theses [15752]