
RGB-DI Images and Full Convolution Neural Network-Based Outdoor Scene Understanding for Mobile Robots

Release Time: 2019-03-13

Indexed by: Journal Article

Date of Publication: 2019-01-01

Journal: IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT

Included Journals: Scopus, SCIE

Volume: 68

Issue: 1

Page Number: 27-37

ISSN: 0018-9456

Key Words: Full convolution neural network (FCN); mobile robots; multisensor data fusion; outdoor scene understanding; semantic segmentation

Abstract: This paper presents a multisensor-based approach to outdoor scene understanding for mobile robots. Since laser scanning points in 3-D space are irregularly and unevenly distributed, a projection algorithm is proposed to generate RGB, depth, and intensity (RGB-DI) images so that the outdoor environment can be optimally measured with a variable resolution. The 3-D semantic segmentation of RGB-DI point clouds is thereby transformed into semantic segmentation of RGB-DI images. A full convolution neural network (FCN) model with deep layers is designed to perform semantic segmentation of RGB-DI images. According to the exact correspondence between each 3-D point and each pixel in an RGB-DI image, the semantic segmentation results of the RGB-DI images are mapped back to the original point clouds to realize 3-D scene understanding. The proposed algorithms are tested on different data sets, and the results show that our RGB-DI image and FCN model-based approach provides superior performance for outdoor scene understanding. Moreover, real-world experiments were conducted on our mobile robot platform to show the validity and practicability of the proposed approach.
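The pipeline the abstract describes (project 3-D laser points onto an image grid, segment the image, map per-pixel labels back to the points) can be sketched as follows. This is a minimal illustration, not the paper's actual projection algorithm: the spherical projection, the fixed 64x512 resolution, and the function names here are all assumptions for demonstration.

```python
import numpy as np

def project_to_image(points, h=64, w=512):
    """Project 3-D points (N, 3) onto an h x w image grid via spherical
    coordinates, returning the depth channel and per-point pixel indices.
    NOTE: illustrative projection only; the paper's variable-resolution
    projection algorithm is not reproduced here."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range (depth)
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * (w - 1)).astype(int)   # column
    pmin, pmax = pitch.min(), pitch.max()
    v = ((pitch - pmin) / max(pmax - pmin, 1e-9) * (h - 1)).astype(int)
    depth = np.full((h, w), np.nan)
    depth[v, u] = r                                    # "D" channel of RGB-DI
    return depth, (v, u)

def labels_to_points(seg_image, pixel_idx):
    """Map per-pixel semantic labels (e.g., FCN output) back to the
    original 3-D points using the stored point-to-pixel correspondence."""
    v, u = pixel_idx
    return seg_image[v, u]
```

Keeping the `(v, u)` index for every point is what makes the final step cheap: once the FCN has labeled the image, a single fancy-indexing lookup transfers the labels to the full point cloud.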
