Obstacle Detection using a TOF Range Camera for Indoor AGV Navigation

T. Hong, R. Bostelman, and R. Madhavan, 2004

This paper evaluates the performance of an obstacle detection and segmentation algorithm for Automated Guided Vehicle (AGV) navigation in factory-like environments using a 3D real-time range camera. Our approach has been tested successfully on object sizes and materials recommended by British safety standards, placed in the vehicle's path. The segmented (mapped) obstacles are then verified against absolute measurements obtained with a relatively accurate 2D scanning laser rangefinder.
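The paper does not include code; as a hedged illustration, the ground-plane thresholding step that such a range-camera segmentation pipeline typically begins with might look like the following (the function name, parameter values, and the floor-at-z=0 convention are assumptions, not details from the paper):

```python
import numpy as np

def segment_obstacles(points, ground_z=0.0, min_height=0.05):
    """Keep 3D points rising more than min_height above the floor plane.

    points: (N, 3) array of points already transformed so that the floor
    lies at z = ground_z (an assumed convention). Everything protruding
    above the floor by more than min_height meters is treated as a
    candidate obstacle; a real pipeline would then cluster these points.
    """
    heights = points[:, 2] - ground_z
    return points[heights > min_height]
```

In practice the surviving points would be clustered (e.g. by a grid or connected components) and each cluster checked against the minimum obstacle size the safety standard prescribes.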


Graffiti Detection Using a Time-Of-Flight Camera

Federico Tombari, Luigi Di Stefano, Stefano Mattoccia, and Andrea Zanetti, 2008

Time-of-Flight (TOF) cameras are a recent and rapidly developing technology that has already proved useful for computer vision tasks. In this paper we investigate the use of a TOF camera for video-based graffiti detection: a monitoring system able to detect acts of vandalism such as dirtying, etching and defacing walls and object surfaces. Experimental results show the promising capabilities of the proposed approach, with further improvements expected as the technology matures.

Visual Tracking Using Color Cameras and Time-of-Flight Range Imaging Sensors

Leila Sabeti, Ehsan Parvizi, Q.M. Jonathan Wu, 2008

This work proposes two particle filter-based visual trackers, one using output images from a color camera and the other using images from a time-of-flight range imaging sensor. The two trackers were compared to identify the advantages and drawbacks of using the color camera's output, as opposed to the time-of-flight sensor's output, for efficient visual tracking. The paper also combines efficient methods in a novel way to produce two stable and reliable human trackers using the two cameras.
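The paper's color and depth likelihood models are not reproduced here; the sketch below shows only the generic predict-weight-resample cycle that any bootstrap particle filter tracker shares (the Gaussian motion and measurement models and all parameter values are illustrative assumptions, not the authors' design):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, measurement, motion_std=2.0, meas_std=5.0):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles: (N, 2) hypothesized image-plane target positions.
    measurement: observed (x, y) position, e.g. from a color- or
    depth-based detector. Both noise models are assumed Gaussian here.
    """
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: Gaussian likelihood of the measurement given each particle.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

Iterating this step pulls the particle cloud toward the measurements; in the paper's setting the weighting step would instead evaluate a color histogram or a depth-based appearance model.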

Pose Estimation and Map Building with a PMD-Camera for Robot Navigation

A. Prusak, O. Melnychuk, H. Roth, I. Schiller, R. Koch, 2007

In this paper we describe a joint approach to robot navigation with collision avoidance, pose estimation and map building using a 2.5D PMD (Photonic Mixer Device) camera combined with a high-resolution spherical camera. The cameras are mounted at the front of the robot with a fixed inclination angle. Navigation and map building proceed in two steps: when entering new terrain, the robot first scans its surroundings while a 3D panorama is generated from the PMD images. In the second step the robot moves along the predefined path, using the PMD camera for collision avoidance and a combined Structure-from-Motion (SfM) and model-tracking approach for self-localization. The computed robot poses are simultaneously used for map building with new measurements from the PMD camera.
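As a rough illustration of the collision-avoidance role the PMD camera plays here, one could flag an imminent obstacle whenever the central region of the depth image gets too close; the ROI fraction, the 0.5 m stop distance, and treating zero depth as "no return" are all assumptions for the sketch, not the authors' method:

```python
import numpy as np

def too_close(depth_image, stop_distance=0.5, roi_frac=0.5):
    """Flag an imminent collision from a forward-facing depth image (meters).

    Only the central region (fraction roi_frac of each axis) is checked,
    on the assumption that it covers the robot's path. Pixels with depth
    zero are treated as invalid (no return) and ignored.
    """
    h, w = depth_image.shape
    dh, dw = int(h * roi_frac / 2), int(w * roi_frac / 2)
    roi = depth_image[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    valid = roi[roi > 0]
    return valid.size > 0 and valid.min() < stop_distance
```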

Calibration of a PMD camera using a planar calibration object together with a multi-camera setup

Ingo Schiller, Christian Beder and Reinhard Koch, 2008

We discuss the joint calibration of novel 3D range cameras based on the time-of-flight principle with the Photonic Mixing Device (PMD) and standard 2D CCD cameras. Due to their small field of view (FOV) and low pixel resolution, PMD cameras are difficult to calibrate with traditional methods. In addition, the 3D range data contains systematic errors that need to be compensated. We therefore develop a calibration method that estimates the full intrinsic calibration of the PMD camera, including optical lens distortions and systematic range errors, and calibrates its external orientation together with multiple 2D cameras rigidly coupled to it. The approach uses a planar checkerboard pattern as the calibration reference, viewed from multiple angles and distances. Combining the PMD camera with standard CCD cameras allows the internal camera parameters to be estimated more precisely and overcomes the limitations of the small FOV. We further use the additional cameras to calibrate the systematic depth measurement error of the PMD camera, and show that our method significantly reduces the correlation between the rotation and translation estimates.
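The paper uses its own parameterization of the systematic range error; as a simplified stand-in, one can fit a polynomial correction mapping measured PMD depth to reference depth derived from the coupled CCD cameras and the checkerboard pose (the cubic model and the function name are assumptions for illustration):

```python
import numpy as np

def fit_depth_correction(measured, true, degree=3):
    """Fit a polynomial correction so that p(d_measured) ≈ d_true.

    measured, true: 1D arrays of depth samples in meters, e.g. PMD
    readings paired with checkerboard distances from the 2D cameras.
    A low-degree polynomial in depth is an assumed error model.
    """
    return np.polynomial.Polynomial.fit(measured, true, degree)
```

Applying the returned polynomial to raw PMD depths then yields corrected range values; the paper's actual error model is estimated jointly with the intrinsic and extrinsic parameters rather than in a separate step like this.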

Real-Time Estimation of the Camera Path from a Sequence of Intrinsically Calibrated PMD Depth Images

Christian Beder and Ingo Schiller and Reinhard Koch, 2008

In recent years, real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) have been developed. These cameras produce sequences of low-resolution depth images at frame rates comparable to regular video cameras; compared to standard laser scanning techniques, spatial resolution is thus traded for temporal resolution. In this work we propose an algorithm that reconstructs the path of a moving PMD depth camera. A constraint describing the relative orientation between two calibrated PMD depth images is derived, and we show how this constraint can be used to efficiently estimate a camera trajectory from a sequence of depth images in real time. Estimating the trajectory of the PMD depth camera makes it possible to integrate depth measurements over a long sequence taken from a moving platform. This increases the spatial resolution and enables interactive scanning of objects with a PMD camera in order to obtain a dense 3D point cloud.
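The paper derives its relative-orientation constraint directly from the calibrated depth images; a common closed-form alternative for the same subproblem, shown here only as a hedged sketch, is the Kabsch/Procrustes solution for the rigid transform between corresponding 3D points from two depth frames:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) with R @ P[i] + t ≈ Q[i].

    P, Q: (N, 3) arrays of corresponding 3D points from two depth images
    (correspondences are assumed given; the paper does not need explicit
    point correspondences). Standard Kabsch solution via SVD of the
    cross-covariance of the centered point sets.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Chaining such frame-to-frame transforms along the sequence yields a camera trajectory, at the cost of drift that the paper's real-time formulation also has to contend with.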

A Combined Approach for Estimating Patchlets from PMD Depth Images and Stereo Intensity Images

Christian Beder, Bogumil Bartczak and Reinhard Koch, 2007

Real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) can be considered a complementary technique to stereo-vision-based depth estimation. Since these systems directly yield 3D measurements, they can also be used to initialize vision-based approaches, especially in highly dynamic environments. Fusing PMD depth images with passive intensity-based stereo is a promising way to obtain reliable surface reconstructions even in weakly textured surface regions. In this work we present a PMD-stereo fusion algorithm for estimating patchlets from a combined PMD-stereo camera rig, where a patchlet is defined as a small oriented planar 3D patch with an associated surface normal. Least-squares estimation schemes are derived for estimating patchlets from PMD range images as well as from a pair of stereo images, and we show how the two approaches can be fused into a single estimation that yields results even if either approach alone fails.
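A patchlet as defined above can be illustrated with a standard total-least-squares plane fit, in which the surface normal is the singular vector of the centered points with the smallest singular value; orienting the normal toward a camera assumed at the origin is an added convention, and the paper's actual scheme jointly weights PMD and stereo observations rather than fitting points alone:

```python
import numpy as np

def fit_patchlet(points):
    """Fit an oriented planar patch (centroid, unit normal) to 3D points.

    points: (N, 3) samples from a small window of a PMD range image or a
    stereo reconstruction. The normal minimizes the sum of squared
    point-to-plane distances and is flipped to face the camera origin.
    """
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]                    # direction of least variance
    if np.dot(n, -c) < 0:         # flip so the normal faces the origin
        n = -n
    return c, n
```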