A Unified Approach to Calibrate a Network of Camcorders and ToF cameras

Li Guan and Marc Pollefeys, 2008

In this paper, we propose a unified calibration technique for a heterogeneous sensor network of video camcorders and Time-of-Flight (ToF) cameras. By moving a spherical calibration target around the commonly observed scene, we can robustly and conveniently extract the sphere centers in the observed images and recover the geometric extrinsics for both types of sensors. The approach is evaluated on a real dataset of two HD camcorders and two ToF cameras, and 3D shapes are reconstructed from the calibrated system. The main contributions are: (1) we show that the sphere surface point closest to the ToF camera center always appears as a highlight, and use this observation to extract sphere centers in the ToF camera images; (2) we propose a unified calibration scheme despite the heterogeneity of the sensors. After calibration, this multi-modal sensor network can efficiently generate high-quality 3D shapes.
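
The geometric part of such a sphere-based calibration can be illustrated with a minimal sketch, assuming both sensors ultimately provide 3D estimates of the moving sphere center (the ToF cameras directly from depth, the camcorders e.g. by triangulation); the relative extrinsics between two sensors then follow from a rigid alignment of the corresponding center tracks. The function `rigid_align` and the toy data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rigid_align(centers_a, centers_b):
    """Estimate R, t with R @ a + t ~= b from corresponding 3D sphere centers
    (Kabsch / orthogonal Procrustes)."""
    A = np.asarray(centers_a, dtype=float)   # (N, 3) centers seen by sensor A
    B = np.asarray(centers_b, dtype=float)   # (N, 3) same centers seen by sensor B
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    H = (A - mu_a).T @ (B - mu_b)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t

if __name__ == "__main__":
    # toy check with a known rotation/translation and mild noise
    rng = np.random.default_rng(0)
    A = rng.uniform(-1.0, 1.0, size=(20, 3))
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    B = A @ R_true.T + t_true + 1e-3 * rng.standard_normal((20, 3))
    R, t = rigid_align(A, B)
    print(np.round(R, 3), np.round(t, 3))
```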

A Combined Approach for Estimating Patchlets from PMD Depth Images and Stereo Intensity Images

Christian Beder, Bogumil Bartczak and Reinhard Koch, 2007

Real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) can be considered a complementary technique to stereo-vision based depth estimation. Since these systems directly yield 3D measurements, they can also be used to initialize vision-based approaches, especially in highly dynamic environments. Fusing PMD depth images with passive intensity-based stereo is a promising approach for obtaining reliable surface reconstructions even in weakly textured surface regions. In this work a PMD-stereo fusion algorithm for estimating patchlets from a combined PMD-stereo camera rig is presented. We define a patchlet as a small oriented planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how these two approaches can be fused into a single estimation that yields results even if one of the two individual approaches fails.
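
A minimal sketch of the patchlet notion, assuming a patchlet is obtained from a small neighbourhood of 3D points by a total-least-squares plane fit, and that a PMD-derived and a stereo-derived estimate are combined by a naive inverse-variance weighting (the paper instead fuses the two measurements in one joint least-squares adjustment with full covariances); `fit_patchlet` and `fuse_patchlets` are illustrative names.

```python
import numpy as np

def fit_patchlet(points):
    """Total-least-squares plane fit to a small 3D neighbourhood.

    Returns the patch centre, the unit normal and the smallest eigenvalue of
    the scatter matrix (a rough proxy for the out-of-plane variance)."""
    P = np.asarray(points, dtype=float)      # (N, 3) points from PMD or stereo
    centre = P.mean(axis=0)
    C = np.cov((P - centre).T)               # 3x3 scatter of the neighbourhood
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # direction of least variance
    return centre, normal, eigvals[0]

def fuse_patchlets(c1, n1, s1, c2, n2, s2):
    """Naive inverse-variance fusion of a PMD and a stereo patchlet estimate."""
    if np.dot(n1, n2) < 0.0:                 # make the normal orientations agree
        n2 = -n2
    w1, w2 = 1.0 / max(s1, 1e-12), 1.0 / max(s2, 1e-12)
    centre = (w1 * c1 + w2 * c2) / (w1 + w2)
    normal = w1 * n1 + w2 * n2
    normal /= np.linalg.norm(normal)
    return centre, normal
```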

A Comparison of PMD-Cameras and Stereo-Vision for the Task of Surface Reconstruction using Patchlets

Christian Beder, Bogumil Bartczak and Reinhard Koch

Recently, real-time active 3D range cameras based on time-of-flight technology (PMD) have become available. These cameras can be considered a competing technique to stereo-vision based surface reconstruction. Since they directly yield accurate 3D measurements, they can be used for benchmarking vision-based approaches, especially in highly dynamic environments. A comparative study of the two approaches is therefore relevant. In this work the achievable accuracy of the two techniques, PMD and stereo, is compared on the basis of patchlet estimation. We define a patchlet as a small oriented planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how the achievable accuracy can be estimated for both systems. Experiments under optimal conditions for both systems are performed and the achievable accuracies are compared. The PMD system outperformed the stereo system in terms of achievable accuracy for distance measurements, while the estimation of the normal direction is comparable for both systems.
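
The different distance-error behaviour of the two systems can be sketched with a standard first-order error-propagation argument (an illustration, not taken from the paper): for stereo with baseline b, focal length f and disparity d, the depth uncertainty grows quadratically with distance, whereas the ToF/PMD range uncertainty is, to first order, independent of the distance.

```latex
% Standard first-order error propagation for stereo depth from disparity
% (illustrative assumption; the symbols b, f, d, sigma_d are not from the paper):
\[
  Z = \frac{b f}{d},
  \qquad
  \sigma_Z \approx \left|\frac{\partial Z}{\partial d}\right|\,\sigma_d
           = \frac{b f}{d^{2}}\,\sigma_d
           = \frac{Z^{2}}{b f}\,\sigma_d .
\]
```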

Calibration and Registration for Precise Surface Reconstruction with TOF Cameras

Stefan Fuchs and Stefan May

This paper presents a method for precise surface reconstruction with time-of-flight (TOF) cameras. A novel calibration approach is developed that simplifies the calibration task and doubles the camera’s precision; it is compared to current calibration methods. Remaining errors are tackled by applying filtering and error-distribution methods. A reference object is thus circumferentially reconstructed with an overall mean precision of approximately 3 mm in translation and 3 deg in rotation. The resulting model serves to quantify the achievable reconstruction precision with TOF cameras, which is a major criterion for the potential analysis of this sensor technology and is demonstrated for the first time in this work.
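
As a generic illustration of systematic TOF distance-error correction (the paper's calibration model and error-distribution step are more elaborate), the sketch below fits a least-squares polynomial mapping from measured to reference distances; `fit_depth_correction` and the calibration arrays are assumed names.

```python
import numpy as np

def fit_depth_correction(measured, reference, degree=3):
    """Least-squares polynomial mapping measured TOF distances to reference
    distances, usable as a simple systematic-error correction."""
    coeffs = np.polyfit(np.asarray(measured, dtype=float),
                        np.asarray(reference, dtype=float), deg=degree)
    return np.poly1d(coeffs)

# hypothetical usage: correct a whole depth image with the fitted model
# correct = fit_depth_correction(calib_measured, calib_reference)
# depth_corrected = correct(depth_image)
```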