RGB-DI Images and Full Convolution Neural Network-Based Outdoor Scene Understanding for Mobile Robots

Indexed by:Journal paper

Date of Publication:2019-01-01

Journal:IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT

Included Journals:SCIE, Scopus

Volume:68

Issue:1

Page Number:27-37

ISSN No.:0018-9456

Key Words:Full convolution neural network (FCN); mobile robots; multisensor data fusion; outdoor scene understanding; semantic segmentation

Abstract:This paper presents a multisensor-based approach to outdoor scene understanding for mobile robots. Since laser scanning points in 3-D space are distributed irregularly and unevenly, a projection algorithm is proposed to generate RGB, depth, and intensity (RGB-DI) images so that outdoor environments can be optimally measured with a variable resolution. The 3-D semantic segmentation of RGB-DI point clouds is thereby transformed into semantic segmentation of RGB-DI images. A full convolution neural network (FCN) model with deep layers is designed to perform semantic segmentation of RGB-DI images. According to the exact correspondence between each 3-D point and each pixel in an RGB-DI image, the semantic segmentation results of the RGB-DI images are mapped back to the original point clouds to realize 3-D scene understanding. The proposed algorithms are tested on different data sets, and the results show that our RGB-DI image and FCN model-based approach provides superior performance for outdoor scene understanding. Moreover, real-world experiments were conducted on our mobile robot platform to show the validity and practicability of the proposed approach.
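The pipeline the abstract describes (project irregular 3-D laser points to a dense image, segment the image, then map per-pixel labels back through the point-to-pixel correspondence) can be sketched as follows. This is a minimal illustration, not the paper's actual projection algorithm: the spherical-projection formula, image size, and sensor field of view below are all assumptions, and the FCN output is replaced by a stand-in label map.

```python
# Hedged sketch: illustrates projecting an unorganized point cloud into a
# depth/intensity image and mapping 2-D segmentation labels back to 3-D points.
# The projection model and FOV values are assumptions, NOT taken from the paper.
import numpy as np

def project_to_image(points, intensity, h=64, w=512):
    """Project N x 3 points into an h x w depth/intensity image.

    Returns the image plus each point's (row, col) pixel, i.e. the exact
    point-to-pixel correspondence used later to map labels back to 3-D.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)  # azimuth angle in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-9), -1.0, 1.0))
    # Map angles to pixel coordinates (fixed vertical FOV is an assumption).
    col = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    fov_up, fov_down = np.deg2rad(15.0), np.deg2rad(-25.0)
    row = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w, 2), dtype=np.float32)  # channels: depth, intensity
    img[row, col, 0] = depth
    img[row, col, 1] = intensity
    return img, row, col

# Usage: segment the image (here a stand-in for the FCN output), then map
# per-pixel labels back to the original 3-D points via the correspondence.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3)) * 10.0
inten = rng.random(1000)
img, row, col = project_to_image(pts, inten)
pixel_labels = np.zeros((64, 512), dtype=int)  # placeholder segmentation map
point_labels = pixel_labels[row, col]          # one label per 3-D point
```

Because `row` and `col` are stored per point, every 3-D point recovers a label even when several points fall into the same pixel, which is the correspondence idea the abstract relies on.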
