SSCNav:
Confidence-Aware Semantic Scene Completion for Visual Semantic Navigation
This paper focuses on visual semantic navigation: producing actions for an active agent to navigate to a specified target object category in an unknown environment. To complete this task, the algorithm must simultaneously locate and navigate to an instance of the category. Compared with traditional point-goal navigation, this task requires the agent to have a stronger contextual prior over indoor environments. We introduce SSCNav, an algorithm that explicitly models scene priors using a confidence-aware semantic scene completion module to complete the scene and guide the agent's navigation planning. Given a partial observation of the environment, SSCNav first infers a complete scene representation with semantic labels for the unobserved scene, together with a confidence map associated with its own prediction. A policy network then infers the action from the scene completion result and the confidence map. Our experiments demonstrate that the proposed scene completion module improves the efficiency of the downstream navigation policies.
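The pipeline described above — partial semantic observation, confidence-aware completion, then a policy conditioned on both — can be sketched with a toy example. This is only an illustrative stand-in, not the paper's implementation: the real SSCNav uses learned neural networks for both completion and policy, whereas the fill rule, confidence values, and class IDs below are hypothetical placeholders.

```python
import numpy as np

UNOBSERVED = -1  # label for cells the agent has not yet seen (hypothetical)

def complete_scene(partial_semantics):
    """Toy stand-in for the completion network: fill every unobserved
    cell with the most frequent observed label, and report high
    confidence for observed cells, low confidence for filled-in ones."""
    observed = partial_semantics != UNOBSERVED
    labels, counts = np.unique(partial_semantics[observed], return_counts=True)
    fill = labels[np.argmax(counts)]
    completed = np.where(observed, partial_semantics, fill)
    confidence = np.where(observed, 1.0, 0.3)  # heuristic confidence map
    return completed, confidence

def policy(completed, confidence, target_class):
    """Toy policy: head toward the highest-confidence cell predicted
    to contain the target category (SSCNav learns this instead)."""
    score = (completed == target_class) * confidence
    if score.max() == 0:
        return None  # target not predicted anywhere yet; keep exploring
    return np.unravel_index(np.argmax(score), score.shape)

# A 5x5 top-down semantic map, partially observed by the agent.
m = np.full((5, 5), UNOBSERVED)
m[0:2, :] = 3            # observed floor (class 3, hypothetical ID)
m[1, 4] = 7              # observed chair (class 7, the target)
completed, conf = complete_scene(m)
goal = policy(completed, conf, target_class=7)  # -> (1, 4)
```

The key idea the sketch captures is that the confidence map lets the policy weigh observed evidence against hallucinated completions when choosing where to go.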
Paper & Code & Data
Latest version: arXiv:2012.04512 [cs.CV]
Code and data are available here.
Team
Bibtex
@inproceedings{liang2021sscnav,
title={SSCNav: Confidence-Aware Semantic Scene Completion for Visual Semantic Navigation},
author={Liang, Yiqing and Chen, Boyuan and Song, Shuran},
booktitle={Proc. of the IEEE International Conference on Robotics and Automation (ICRA)},
year={2021}
}
Technical Summary Video
Acknowledgements
This work was supported in part by the Amazon Research Award and Columbia School of Engineering.
Contact
If you have any questions, please feel free to contact Yiqing Liang.