Pose Estimation and Map Building with a PMD-Camera for Robot Navigation

A. Prusak, O. Melnychuk, H. Roth, I. Schiller, R. Koch, 2007

In this paper we describe a joint approach to robot navigation with collision avoidance, pose estimation and map building using a 2.5D PMD (Photonic Mixer Device) camera combined with a high-resolution spherical camera. The cameras are mounted at the front of the robot at a fixed inclination angle. Navigation and map building consist of two steps: when entering new terrain, the robot first scans its surroundings while a 3D panorama is simultaneously generated from the PMD images. In the second step the robot moves along the predefined path, using the PMD camera for collision avoidance and a combined structure-from-motion (SfM) and model-tracking approach for self-localization. The computed robot poses are simultaneously used for map building with new measurements from the PMD camera.

Calibration of a PMD camera using a planar calibration object together with a multi-camera setup

Ingo Schiller, Christian Beder and Reinhard Koch, 2008

We discuss the joint calibration of novel 3D range cameras based on the time-of-flight principle with the Photonic Mixing Device (PMD) and standard 2D CCD cameras. Due to their small field of view (FOV) and low pixel resolution, PMD-cameras are difficult to calibrate with traditional calibration methods. In addition, the 3D range data contains systematic errors that need to be compensated. Therefore, a calibration method is developed that can estimate the full intrinsic calibration of the PMD-camera, including optical lens distortions and systematic range errors, and that can calibrate the external orientation together with multiple 2D cameras rigidly coupled to the PMD-camera. The calibration approach is based on a planar checkerboard pattern as calibration reference, viewed from multiple angles and distances. By combining the PMD-camera with standard CCD-cameras, the internal camera parameters can be estimated more precisely and the limitations of the small FOV can be overcome. Furthermore, we use the additional cameras to calibrate the systematic depth measurement error of the PMD-camera. We show that the correlation between rotation and translation estimation is significantly reduced with our method.

Real-Time Estimation of the Camera Path from a Sequence of Intrinsically Calibrated PMD Depth Images

Christian Beder and Ingo Schiller and Reinhard Koch, 2008

In recent years, real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) have been developed. These cameras produce sequences of low-resolution depth images at frame rates comparable to regular video cameras; spatial resolution is thus traded against temporal resolution compared to standard laser-scanning techniques. In this work an algorithm is proposed that reconstructs the camera path of a moving PMD depth camera. A constraint describing the relative orientation between two calibrated PMD depth images is derived, and it is shown how this constraint can be used to efficiently estimate a camera trajectory from a sequence of depth images in real time. Estimating the trajectory of the PMD depth camera makes it possible to integrate the depth measurements over a long sequence taken from a moving platform. This increases the spatial resolution and enables interactive scanning of objects with a PMD camera in order to obtain a dense 3D point cloud.
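
The core geometric step, recovering the rigid motion between two sets of corresponding 3D points back-projected from calibrated depth images, can be sketched with a standard SVD-based alignment. This is a generic Kabsch-style solver, not the paper's specific constraint formulation, and point correspondences are assumed to be given:

```python
import numpy as np

def rigid_align(P, Q):
    """Estimate rotation R and translation t with R @ P + t = Q.

    P, Q: (3, N) arrays of corresponding 3D points, e.g. back-projected
    from two calibrated depth images. (Correspondences are assumed known
    here, a simplification of the paper's constraint-based approach.)
    """
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflection
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t
```

Chaining such pairwise motions over a depth-image sequence yields the camera trajectory; in practice each estimate would be refined robustly rather than taken from raw correspondences.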

A Combined Approach for Estimating Patchlets from PMD Depth Images and Stereo Intensity Images

Christian Beder, Bogumil Bartczak and Reinhard Koch, 2007

Real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) can be considered a complementary technique for stereo-vision-based depth estimation. Since those systems directly yield 3D measurements, they can also be used for initializing vision-based approaches, especially in highly dynamic environments. Fusion of PMD depth images with passive intensity-based stereo is a promising approach for obtaining reliable surface reconstructions even in weakly textured surface regions. In this work a PMD-stereo fusion algorithm for the estimation of patchlets from a combined PMD-stereo camera rig is presented. As a patchlet we define a small oriented planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how those two approaches can be fused into a single estimation that yields results even if either of the two individual approaches fails.
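
In its simplest form, the patchlet fit from range data alone reduces to a total least-squares plane fit over a small neighbourhood of 3D points. A minimal sketch, omitting the uncertainty propagation and the stereo fusion the paper derives:

```python
import numpy as np

def fit_patchlet(points):
    """Fit a small planar patch (patchlet) to an (N, 3) array of 3D points.

    Returns the patch centre and unit surface normal via a plain total
    least-squares plane fit (SVD); the paper additionally propagates
    measurement uncertainties, which is omitted here.
    """
    centre = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centre)
    normal = Vt[-1]          # direction of least variance = plane normal
    return centre, normal
```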

Calibration of focal length and 3d pose based on the reflectance and depth image of a planar object

Christian Beder and Reinhard Koch

The estimation of the position and orientation of a PMD camera in a global reference frame is required by many measurement applications based on such systems. PMD cameras produce a depth as well as a reflectance image of low resolution compared to standard optical cameras, so that calibration based on the reflectance image alone is difficult. We present a novel approach for calibrating the focal length and 3D pose of a PMD camera based on the depth and reflectance image of a planar checkerboard pattern. By integrating both sources of information, higher accuracies can be achieved. Furthermore, a single image is sufficient for calibrating the focal length as well as the 3D pose from a planar reference object, because the depth measurements are orthogonal to the lateral intensity measurements and provide direct metric information.
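
The key observation, that the measured metric depth to the reference plane fixes the scale, can be illustrated with the basic pinhole relation for a fronto-parallel checkerboard. This is a deliberately simplified illustration of why one view suffices, not the paper's joint estimation; the function and its parameters are hypothetical:

```python
def focal_from_plane_depth(square_px, square_m, depth_m):
    """Pinhole relation f = Z * (pixel extent / metric extent).

    square_px : side length of one checkerboard square in pixels
    square_m  : its known metric side length
    depth_m   : distance to the (fronto-parallel) plane, taken directly
                from the PMD depth image, which supplies the metric scale

    Illustrative simplification: a single fronto-parallel view plus the
    depth channel already determines the focal length.
    """
    return depth_m * square_px / square_m
```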

A Comparison of PMD-Cameras and Stereo-Vision for the Task of Surface Reconstruction using Patchlets

Christian Beder, Bogumil Bartczak and Reinhard Koch

Recently, real-time active 3D range cameras based on time-of-flight technology (PMD) have become available. These cameras can be considered a competing technique for stereo-vision-based surface reconstruction. Since those systems directly yield accurate 3D measurements, they can be used for benchmarking vision-based approaches, especially in highly dynamic environments. Therefore, a comparative study of the two approaches is relevant. In this work the achievable accuracy of the two techniques, PMD and stereo, is compared on the basis of patchlet estimation. As a patchlet we define a small oriented planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how the achievable accuracy can be estimated for both systems. Experiments under optimal conditions for both systems are performed and the achievable accuracies are compared. The PMD system was found to outperform the stereo system in terms of achievable accuracy for distance measurements, while the estimation of the normal direction is comparable for both systems.

Calibration of the Intensity-Related Distance Error of the PMD TOF-Camera

Marvin Lindner and Andreas Kolb

A growing number of modern applications such as position determination, online object recognition and collision prevention depend on accurate scene analysis. A low-cost and fast alternative to standard techniques like laser scanners or stereo vision is distance measurement with modulated, coherent infrared light based on the Photo Mixing Device (PMD) technique. This paper describes an enhanced calibration approach for PMD-based distance sensors, for which highly accurate calibration techniques have not yet been widely investigated. Compared to other known methods, our approach incorporates additional deviation errors related to the active illumination incident on the sensor pixels. The resulting calibration yields significantly more precise distance information. Furthermore, we present a simple-to-use, vision-based approach for acquiring the reference data required by any distance calibration scheme, yielding a lightweight, on-site calibration system with little expenditure in terms of equipment.
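
The idea of an intensity-dependent distance correction can be sketched as a polynomial error model fitted against reference distances. This is a generic stand-in for the paper's calibration (which uses B-spline-like correction functions); the model form, variable names and degree here are assumptions:

```python
import numpy as np

def fit_depth_correction(d_meas, intensity, d_true, deg=2):
    """Fit d_true = d_meas + f(d_meas, intensity) by linear least squares.

    The design matrix holds monomials d^i * I^j with i + j <= deg; a
    simplified stand-in for the paper's intensity-related calibration.
    """
    err = d_true - d_meas
    cols = [d_meas**i * intensity**j
            for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, err, rcond=None)
    return coeffs

def apply_depth_correction(d_meas, intensity, coeffs, deg=2):
    """Apply the fitted correction to new measurements."""
    cols = [d_meas**i * intensity**j
            for i in range(deg + 1) for j in range(deg + 1 - i)]
    return d_meas + np.stack(cols, axis=1) @ coeffs
```

The reference distances `d_true` would come from the vision-based acquisition scheme the paper describes (e.g. a tracked planar target), not from a laser scanner.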

Extrinsic and Depth Calibration of ToF-cameras

Stefan Fuchs and Gerd Hirzinger

Recently, ToF-cameras have attracted attention because of their ability to generate a full 2.5D depth image at video frame rate. Thus, ToF-cameras are suitable for real-time 3D tasks such as tracking, visual servoing or object pose estimation. The usability of such systems depends mainly on an accurate camera calibration. In this work a calibration process for ToF-cameras is described with respect to the intrinsic parameters, the depth measurement distortion and the pose of the camera relative to a robot's end-effector. The calibration process is based not only on the monochromatic images of the camera but also on its depth values, generated from a checkerboard pattern. The robustness and accuracy of the presented method are assessed by applying it to randomly selected shots and comparing the calibrated measurements to ground truth obtained from a laser scanner.

Calibration and Registration for Precise Surface Reconstruction with TOF Cameras

Stefan Fuchs and Stefan May

This paper presents a method for precise surface reconstruction with time-of-flight (TOF) cameras. A novel calibration approach, which simplifies the calibration task and doubles the camera's precision, is developed and compared to current calibration methods. Remaining errors are tackled by applying filtering and error-distribution methods. Thus, a reference object is circumferentially reconstructed with an overall mean precision of approximately 3 mm in translation and 3 deg in rotation. The resulting model quantifies the achievable reconstruction precision with TOF cameras, a major criterion for the potential analysis of this sensor technology, demonstrated here for the first time.

Robust Edge Extraction for Swissranger SR-3000 Range Images

Cang Ye and GuruPrasad M. Hegde

This paper presents a new method for extracting object edges from range images obtained by a 3D range imaging sensor, the SwissRanger SR-3000. In the range image preprocessing stage, the method enhances object edges by using surface normal information, and it employs the Hough Transform to detect straight-line features in the Normal-Enhanced Range Image (NERI). Due to the noise in the sensor's range data, a NERI contains corrupted object surfaces that may result in unwanted edges and greatly encumber the extraction of linear features. To alleviate this problem, a Singular Value Decomposition (SVD) filter is developed to smooth object surfaces. The efficacy of the edge extraction method is validated by experiments in various environments.
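
The surface-smoothing idea behind such an SVD filter can be sketched as projecting a local neighbourhood of 3D points onto its own least-squares plane, which removes out-of-plane noise while preserving planar structure. A minimal sketch operating on a point patch (the paper works on range-image windows; the exact filter design differs):

```python
import numpy as np

def svd_smooth_patch(patch):
    """Smooth a local (N, 3) point patch by projecting each point onto
    the patch's least-squares plane (the smallest singular direction of
    the centred patch is taken as the out-of-plane normal)."""
    c = patch.mean(axis=0)
    _, _, Vt = np.linalg.svd(patch - c)
    n = Vt[-1]
    # subtract each point's out-of-plane component
    return patch - np.outer((patch - c) @ n, n)
```

Normals computed on the smoothed surface then give a cleaner normal-enhanced image for the subsequent Hough-based line detection.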