Laser-based Navigation Enhanced with 3D Time-of-Flight Data

Fang Yuan, Agnes Swadzba, Roland Philippsen, Orhan Engin, Marc Hanheide, and Sven Wachsmuth

Navigation and obstacle avoidance based on planar laser scans have matured over the last decades, enabling robots to move smoothly even through highly dynamic and populated spaces such as people's homes. However, in an unconstrained environment the two-dimensional perceptual space of a fixed-mounted laser is not sufficient to ensure safe navigation. In this paper, we present an approach that combines a fast and reliable motion-generation method with modern 3D capturing techniques using a Time-of-Flight camera. Instead of attempting full 3D motion control, which is computationally more expensive and simply not needed for the targeted scenario of a domestic robot, we introduce a "virtual laser": real laser measurements and 3D point clouds are fused into a continuous data stream that is fully compatible with, and transparent to, the originally laser-only motion generation. The paper covers the general concept and the necessary extrinsic calibration of the two very different types of sensors, and illustrates the benefit with an example of avoiding obstacles that are not perceivable in the original laser scan.
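A minimal sketch of the "virtual laser" fusion described above, assuming the ToF point cloud has already been transformed into the laser frame by the extrinsic calibration; the function name, frame conventions, and height band are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def virtual_laser_scan(laser_ranges, angle_min, angle_increment,
                       tof_points, z_min=0.05, z_max=1.5):
    """Fuse a planar laser scan with a 3D ToF point cloud.

    laser_ranges: 1D array of ranges from the real laser, one per beam.
    tof_points:   (N, 3) ToF points already expressed in the laser
                  frame (x forward, y left, z up) via the extrinsics.
    Points outside [z_min, z_max] (floor, ceiling) are ignored.
    """
    fused = laser_ranges.copy()
    # Keep only points at heights where the robot could collide.
    pts = tof_points[(tof_points[:, 2] > z_min) & (tof_points[:, 2] < z_max)]
    # Project each remaining 3D point down into the laser plane.
    ranges = np.hypot(pts[:, 0], pts[:, 1])
    angles = np.arctan2(pts[:, 1], pts[:, 0])
    beams = np.round((angles - angle_min) / angle_increment).astype(int)
    valid = (beams >= 0) & (beams < fused.shape[0])
    for b, r in zip(beams[valid], ranges[valid]):
        fused[b] = min(fused[b], r)  # per beam, the nearest obstacle wins
    return fused
```

The design point is that the fused output has exactly the shape of a real laser scan, which is what makes the fusion transparent to the downstream motion generator.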

3D Head Tracking Based on Recognition and Interpolation Using a Time-of-Flight Depth Sensor

Salih Burak Göktürk and Carlo Tomasi, 2004

This paper describes a head-tracking algorithm based on recognition and correlation-based weighted interpolation. The input is a sequence of 3D depth images generated by a novel time-of-flight depth sensor. These are segmented into background and foreground, and the latter is used as the input to the head-tracking algorithm, which is composed of three major modules: first, a depth signature is created from the depth images; next, the signature is compared against signatures collected in a training set of depth images; finally, a correlation metric is calculated between the most likely signature hits. The head location is calculated by interpolating among stored depth values, using the correlation metrics as the weights. This combination of depth sensing and recognition-based head tracking succeeds more than 90 percent of the time. Even if the track is temporarily lost, it is easily recovered once a good match is obtained from the training set. The use of depth images and recognition-based head tracking achieves robust real-time tracking results under extreme conditions such as 180-degree rotation, temporary occlusions, and complex backgrounds.
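A sketch of the correlation-weighted interpolation step, under the assumption that each depth signature is a fixed-length 1D vector and that each training signature is stored together with the head location at which it was recorded; all names are hypothetical:

```python
import numpy as np

def estimate_head_location(query_sig, train_sigs, train_locs, k=5):
    """Correlation-weighted interpolation over a training set.

    query_sig:  1D depth signature of the current frame.
    train_sigs: (M, D) stored signatures.
    train_locs: (M, 3) head locations associated with each signature.
    Returns the interpolated 3D head location.
    """
    # Normalized correlation between the query and each stored signature.
    q = (query_sig - query_sig.mean()) / (query_sig.std() + 1e-9)
    t = train_sigs - train_sigs.mean(axis=1, keepdims=True)
    t /= train_sigs.std(axis=1, keepdims=True) + 1e-9
    corr = t @ q / q.size            # one correlation score per exemplar

    best = np.argsort(corr)[-k:]     # the k most similar signatures
    w = np.clip(corr[best], 0.0, None)
    w /= w.sum() + 1e-9              # correlations act as weights
    return w @ train_locs[best]      # weighted average of stored locations
```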

A Unified Approach to Calibrate a Network of Camcorders and ToF cameras

Li Guan, Marc Pollefeys, 2008

In this paper, we propose a unified calibration technique for a heterogeneous sensor network of video camcorders and Time-of-Flight (ToF) cameras. By moving a spherical calibration target around the commonly observed scene, we can robustly and conveniently extract the sphere centers in the observed images and recover the geometric extrinsics for both types of sensors. The approach is evaluated on a real dataset of two HD camcorders and two ToF cameras, and 3D shapes are reconstructed from the calibrated system. The main contributions are: (1) we observe that the sphere surface point nearest to the ToF camera center is always highlighted, and use this to extract sphere centers in the ToF camera images; (2) we propose a unified calibration scheme in spite of the heterogeneity of the sensors. After calibration, this multi-modal sensor network can generate high-quality 3D shapes efficiently.
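Contribution (1) suggests a compact procedure for recovering the sphere center from a single ToF frame; a sketch, assuming the depth image stores radial distance and that pinhole intrinsics are known (names are illustrative):

```python
import numpy as np

def sphere_center_from_tof(amplitude, depth, fx, fy, cx, cy, radius):
    """Estimate the calibration sphere's center from one ToF frame.

    The brightest pixel is taken as the sphere surface point nearest
    the camera; the center lies one radius further along its viewing
    ray. Assumes `depth` stores radial distance and (fx, fy, cx, cy)
    are pinhole intrinsics of the ToF camera.
    """
    v, u = np.unravel_index(np.argmax(amplitude), amplitude.shape)
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)
    surface = ray * depth[v, u]    # frontmost surface point in 3D
    return surface + ray * radius  # center sits one radius behind it
```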

A Virtual Keyboard Based on True-3D Optical Ranging

Huan Du, Thierry Oggier, Felix Lustenberger, Edoardo Charbon, 2005

In this paper, we present a complete system that mimics a QWERTY keyboard on an arbitrary surface. The system consists of a pattern projector and a true-3D range camera for detecting typing events. We exploit depth information acquired with the 3D range camera and detect the hand region using a pre-computed reference frame. The fingertips are found by analyzing the hands' contour and fitting the depth curve with different feature models. To detect a keystroke, we analyze the features of the depth curve and map the fingertip back to a global coordinate system to find which key was pressed. These steps are fully automated and require no human intervention. The system can be used in any application requiring zero form factor and minimal or no contact with a medium, as in many cases of human-computer interaction, virtual reality, game control, 3D design, etc.
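A toy version of the keystroke test implied above, assuming the pre-computed reference frame is a depth image of the empty typing surface and that fingertip detection and the projected key layout are available; everything here is an assumption for illustration:

```python
import numpy as np

def detect_keystrokes(depth, reference, fingertips, key_map, tol=0.008):
    """Report which keys are being pressed in the current frame.

    depth:      current range-camera depth image (meters).
    reference:  depth image of the empty typing surface, captured once.
    fingertips: list of (u, v) pixel positions of detected fingertips.
    key_map:    callable (u, v) -> key label, encoding the projected
                keyboard layout in image coordinates.
    A fingertip whose depth matches the reference surface within `tol`
    is considered to be touching the surface, i.e. pressing a key.
    """
    pressed = []
    for (u, v) in fingertips:
        if abs(depth[v, u] - reference[v, u]) < tol:
            pressed.append(key_map(u, v))
    return pressed
```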

Real-Time, Three-Dimensional Object Detection and Modeling in Construction

Jochen Teizer, Frederic Bosche, Carlos H. Caldas, Carl T. Haas, and Katherine A. Liapi, 2005

This paper describes a research effort to produce methods for modeling three-dimensional scenes of construction field objects in real time, adding valuable data to construction information management systems as well as equipment navigation systems. For efficiency, typical construction objects are modeled by bounding surfaces, using a high-frame-rate range sensor called Flash LADAR. The sensor provides a dense cloud of range points, which are segmented and grouped into objects. Algorithms are being developed to accurately detect these objects and model characteristics such as volume, speed, and direction. Initial experiments show the feasibility of this method. The advantages, the limitations, and potential solutions to those limitations are summarized in this paper.
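A rough sketch of the segment-group-model pipeline described above, with DBSCAN standing in for the paper's own segmentation step (the paper does not specify this algorithm); names and thresholds are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def model_objects(points, prev_centroids, dt):
    """Group a range point cloud into objects and derive per-object
    properties (bounding box, volume, speed, direction).

    points:         (N, 3) range points from one frame.
    prev_centroids: list of (3,) object centroids from the last frame.
    dt:             time between frames in seconds.
    """
    labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(points)
    objects = []
    for k in set(labels) - {-1}:            # label -1 marks noise points
        cluster = points[labels == k]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)
        centroid = cluster.mean(axis=0)
        # The nearest previous centroid gives a crude velocity estimate.
        if prev_centroids:
            dists = [np.linalg.norm(centroid - c) for c in prev_centroids]
            velocity = (centroid - prev_centroids[int(np.argmin(dists))]) / dt
        else:
            velocity = np.zeros(3)
        objects.append({
            "bbox": (lo, hi),
            "volume": float(np.prod(hi - lo)),  # bounding-box volume
            "speed": float(np.linalg.norm(velocity)),
            "direction": velocity,
        })
    return objects
```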

Online environment reconstruction for biped navigation

Philipp Michel, Joel Chestnutt, Satoshi Kagami, Koichi Nishiwaki, James Kuffner and Takeo Kanade, 2006

As navigation autonomy becomes an increasingly important research topic for biped humanoid robots, efficient approaches to perception and mapping that are suited to the unique characteristics of humanoids and their typical operating environments will be required. This paper presents a system for online environment reconstruction that utilizes both external sensors for global localization, and on-body sensors for detailed local mapping. An external optical motion capture system is used to accurately localize on-board sensors that integrate successive 2D views of a calibrated camera and range measurements from a SwissRanger SR-2 time-of-flight sensor to construct global environment maps in real-time. Environment obstacle geometry is encoded in 2D occupancy grids and 2.5D height maps for navigation planning. We present an on-body implementation for the HRP-2 humanoid robot that, combined with a footstep planner, enables the robot to autonomously traverse dynamic environments containing unpredictably moving obstacles.
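A sketch of maintaining a 2.5D height map from globally localized range measurements; the grid layout and names are assumptions, not the paper's code:

```python
import numpy as np

def update_height_map(height_map, points, resolution, origin):
    """Fold a batch of globally localized 3D points into a 2.5D
    height map suitable for footstep planning.

    height_map: 2D array of cell heights (meters), -inf where unseen.
    points:     (N, 3) points in the world frame, z up.
    resolution: cell edge length in meters.
    origin:     (x, y) world coordinate of cell (0, 0).
    Each cell keeps the highest point that fell into it.
    """
    ix = ((points[:, 0] - origin[0]) / resolution).astype(int)
    iy = ((points[:, 1] - origin[1]) / resolution).astype(int)
    ok = (ix >= 0) & (ix < height_map.shape[0]) & \
         (iy >= 0) & (iy < height_map.shape[1])
    for x, y, z in zip(ix[ok], iy[ok], points[ok, 2]):
        height_map[x, y] = max(height_map[x, y], z)
    return height_map
```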

3D Object Reconstruction with Heterogeneous Sensor Data

Li Guan, Jean-Sebastien Franco, Marc Pollefeys, 2008

In this paper, we reconstruct 3D objects with a heterogeneous sensor network of Time-of-Flight (ToF) Range Imaging (RIM) sensors and high-resolution camcorders. With this setup, we first carry out a simple but effective depth calibration for the RIM cameras. We then combine the camcorder silhouette cues with the RIM camera depth information for the reconstruction. Our main contribution is a sensor fusion framework in which the computation is general, simple, and scalable. Although we only discuss the fusion of conventional cameras and RIM cameras in this paper, the proposed framework can be applied to other vision sensors. The framework uses a space occupancy grid as a probabilistic 3D representation of scene contents. After defining a sensing model for each type of sensor, reconstruction becomes a Bayesian inference problem that can be solved robustly. Experiments show that the quality of the reconstruction is substantially improved over the noisy depth sensor measurements.
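The fusion framework reduces, per voxel, to combining independent sensor evidence with a prior. A minimal sketch, assuming each sensor model has already been evaluated into per-voxel log-likelihood ratios (the paper's actual sensing models for silhouettes and depth are more involved):

```python
import numpy as np

def fuse_occupancy(sensor_logodds, prior=0.5):
    """Per-voxel Bayesian fusion over a space occupancy grid.

    sensor_logodds: list of arrays, one per sensor, all of the same
        grid shape; each entry is
        log(P(measurement | occupied) / P(measurement | free))
        for the corresponding voxel.
    Returns the posterior occupancy probability per voxel.
    """
    log_odds = np.log(prior / (1.0 - prior))   # prior in log-odds form
    for l in sensor_logodds:
        log_odds = log_odds + l                # sensors assumed independent
    return 1.0 / (1.0 + np.exp(-log_odds))     # back to probability
```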

Obstacle Detection using a TOF Range Camera for Indoor AGV Navigation

T. Hong, R. Bostelman, and R. Madhavan, 2004

This paper evaluates the performance of an obstacle detection and segmentation algorithm for Automated Guided Vehicle (AGV) navigation in factory-like environments using a 3D real-time range camera. Our approach has been tested successfully on objects of the sizes and materials recommended by the British safety standard, placed in the vehicle's path. The segmented (mapped) obstacles are then verified using absolute measurements obtained with a relatively accurate 2D scanning laser rangefinder.
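A minimal obstacle detection/segmentation sketch of the general kind evaluated above, not the paper's algorithm: floor returns are dropped, remaining points are rasterized into a grid, and occupied cells are grouped into obstacle candidates. All names and thresholds are assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_obstacles(points_xyz, floor_z=0.0, min_height=0.05,
                     cell=0.05, min_cells=4):
    """Segment a range-camera point cloud into obstacle candidates.

    points_xyz: (N, 3) points in the vehicle frame, z up.
    Returns a list of arrays of occupied grid cells, one per obstacle.
    """
    # Discard floor returns; keep anything tall enough to matter.
    above = points_xyz[points_xyz[:, 2] > floor_z + min_height]
    if above.size == 0:
        return []
    # Rasterize the surviving points into a 2D occupancy grid.
    ix = ((above[:, 0] - above[:, 0].min()) / cell).astype(int)
    iy = ((above[:, 1] - above[:, 1].min()) / cell).astype(int)
    grid = np.zeros((ix.max() + 1, iy.max() + 1), dtype=bool)
    grid[ix, iy] = True
    labels, n = ndimage.label(grid)  # connected-component grouping
    return [np.argwhere(labels == k) for k in range(1, n + 1)
            if (labels == k).sum() >= min_cells]  # reject speckle
```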

Graffiti Detection Using a Time-Of-Flight Camera

Federico Tombari, Luigi Di Stefano, Stefano Mattoccia, and Andrea Zanetti, 2008

Time-of-Flight (TOF) cameras are a recent and rapidly developing technology that has already proved useful for computer vision tasks. In this paper we investigate the use of a TOF camera for video-based graffiti detection, which can be thought of as a monitoring system able to detect acts of vandalism such as dirtying, etching, and defacing walls and object surfaces. Experimental results show the promising capabilities of the proposed approach, with further improvements expected as the technology matures.
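One plausible way a TOF camera helps here (my reading of the setup, not the paper's published algorithm): because the camera measures both amplitude and depth, a pixel whose appearance changes while its geometry stays fixed is a candidate defacement rather than a person or object passing in front of the surface. A sketch with hypothetical names:

```python
import numpy as np

def graffiti_mask(amplitude, depth, bg_amplitude, bg_depth,
                  amp_tol=15.0, depth_tol=0.05):
    """Flag pixels whose appearance changed but whose depth did not.

    amplitude, depth:       current ToF amplitude and depth images.
    bg_amplitude, bg_depth: background model of the monitored surface.
    Returns a boolean mask of candidate graffiti pixels.
    """
    appearance_changed = np.abs(amplitude - bg_amplitude) > amp_tol
    geometry_unchanged = np.abs(depth - bg_depth) < depth_tol
    # A passing person changes both; paint on the wall changes only
    # the appearance, so it survives the conjunction below.
    return appearance_changed & geometry_unchanged
```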

Visual Tracking Using Color Cameras and Time-of-Flight Range Imaging Sensors

Leila Sabeti, Ehsan Parvizi, Q.M. Jonathan Wu, 2008

This work proposes two particle-filter-based visual trackers: one using output images from a color camera, the other using images from a time-of-flight range imaging sensor. The two trackers are compared in order to identify the advantages and drawbacks of using output from the color camera versus the time-of-flight range imaging sensor for the most efficient visual tracking. The paper is also unique in its novel combination of efficient methods, producing two stable and reliable human trackers from the two cameras.
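Both trackers share the generic particle-filter loop; only the measurement model differs (e.g., a color-histogram score for the color camera versus a depth-based score for the ToF sensor). A bare skeleton, with all names assumed:

```python
import numpy as np

def particle_filter_step(particles, weights, measure_likelihood,
                         motion_std=5.0):
    """One predict/weight/resample cycle of a particle-filter tracker.

    particles: (N, D) state hypotheses, e.g. image position and scale.
    weights:   (N,) importance weights from the previous frame.
    measure_likelihood: callable mapping (N, D) states to (N,) scores,
        e.g. a color-histogram or depth-template similarity.
    """
    n = len(particles)
    # Predict: diffuse hypotheses with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std,
                                             particles.shape)
    # Weight: score every hypothesis against the current frame.
    weights = weights * measure_likelihood(particles) + 1e-12
    weights = weights / weights.sum()
    # Resample: draw a fresh, evenly weighted particle set.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```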