
Scene Understanding and Semantic Mapping for Unmanned Ground Vehicles Using 3D Point Clouds


Indexed by:Conference paper

Date of Publication:2018-01-01

Included Journals:CPCI-S, EI

Page Number:341-347

Key Words:unmanned ground vehicles; scene understanding; semantic map; 3D point clouds

Abstract:The perception and understanding of the surrounding environment are the foundation of UGV navigation and mapping. This paper proposes a semantic mapping method for UGVs in large-scale outdoor environments. The 3D laser point clouds are transformed into 2D optimal depth and vector length (ODVL) graph models. The ODVL images are divided into superpixels, and 20-dimensional texture features are extracted from each superpixel. Based on these texture features, the Gentle-AdaBoost algorithm is used to classify the superpixels and achieve scene understanding. According to the scene understanding results, the environment is divided into scene nodes and road nodes. The semantic map of the outdoor environment is obtained by generating topological relations between the scene nodes and the road nodes. A real semantic map of a large-scale outdoor environment is built to verify the effectiveness and practicability of the proposed method.
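
The superpixel classification stage can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes pre-computed 20-dimensional texture features per superpixel, uses randomly generated placeholder data, and uses scikit-learn's AdaBoostClassifier (whose default decision-stump learner and discrete boosting scheme stand in for Gentle-AdaBoost, which scikit-learn does not provide directly).

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical placeholder data: one 20-dimensional texture feature vector and
# one scene-class label (e.g. road, vegetation, building) per superpixel.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))
y_train = rng.integers(0, 3, size=500)

# Boosted ensemble over the per-superpixel texture features
# (the default base learner is a depth-1 decision tree, i.e. a stump).
clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X_train, y_train)

X_test = rng.normal(size=(50, 20))
labels = clf.predict(X_test)  # one predicted scene label per superpixel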

