This paper is a preprint and has not been certified by peer review.

Converting Depth Images and Point Clouds for Feature-based Pose Estimation

Authors

Robert Lösch (Technical University Bergakademie Freiberg, Germany), Mark Sastuba (German Centre for Rail Traffic Research at the Federal Railway Authority, Germany), Jonas Toth (Technical University Bergakademie Freiberg, Germany), Bernhard Jung (Technical University Bergakademie Freiberg, Germany)

Abstract

In recent years, depth sensors have become increasingly affordable and have found their way into a growing number of robotic systems. However, mono- or multi-modal sensor registration, often a necessary step for further processing, faces many challenges on raw depth images or point clouds. This paper presents a method for converting depth data into images that visualize spatial details which remain largely hidden in traditional depth images. After noise removal, two normal vectors are computed from the neighborhood of each point, and their difference is encoded in the converted image. Compared to Bearing Angle images, our method yields brighter, higher-contrast images with more visible contours and more details. We evaluated feature-based pose estimation on both conversions in a visual odometry task and in RGB-D SLAM. For all tested features (AKAZE, ORB, SIFT, and SURF), our new Flexion images yield better results than Bearing Angle images and show great potential to bridge the gap between depth data and classical computer vision. Source code is available here: https://rlsch.github.io/depth-flexion-conversion.
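
To make the conversion concrete, below is a minimal sketch of how a flexion-style image could be computed from an organized point cloud. This is not the authors' reference implementation (see the linked source code for that): the function name flexion_image, the choice of diagonal neighbors, and the brightness mapping are illustrative assumptions. For each interior pixel, two normals are estimated from the surrounding points and the angle between them is encoded as a grayscale intensity.

```python
import numpy as np

def flexion_image(points: np.ndarray) -> np.ndarray:
    """Convert an organized point cloud of shape (H, W, 3) into a
    flexion-style grayscale image of shape (H, W), values in [0, 1].

    Hypothetical sketch: for each interior pixel, two normal vectors
    are estimated from its diagonal neighbors, and the angle between
    them is encoded as pixel intensity.
    """
    h, w, _ = points.shape
    img = np.zeros((h, w), dtype=np.float64)
    for v in range(1, h - 1):
        for u in range(1, w - 1):
            # Four diagonal neighbors of the current pixel.
            tl = points[v - 1, u - 1]
            tr = points[v - 1, u + 1]
            bl = points[v + 1, u - 1]
            br = points[v + 1, u + 1]
            # Split the neighborhood quad into two triangles and take
            # their normals; on a flat surface the normals are parallel.
            n1 = np.cross(tr - tl, bl - tl)
            n2 = np.cross(bl - br, tr - br)
            denom = np.linalg.norm(n1) * np.linalg.norm(n2)
            if denom < 1e-12:  # degenerate geometry or missing depth
                continue
            # Parallel normals (flat surface) map to bright pixels,
            # strong local bending maps to dark pixels (an assumed
            # mapping; the paper defines its own encoding).
            img[v, u] = abs(np.dot(n1, n2)) / denom
    return img
```

In a pipeline like the one evaluated in the paper, the organized point cloud would be obtained by back-projecting a depth image with the camera intrinsics, and the resulting float image scaled to 8-bit before running feature detectors such as AKAZE, ORB, SIFT, or SURF.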
