Li Guan and Marc Pollefeys, 2008
In this paper, we propose a unified calibration technique for a heterogeneous sensor network of video camcorders and Time-of-Flight (ToF) cameras. By moving a spherical calibration target around the commonly observed scene, we can robustly and conveniently extract the sphere centers in the observed images and recover the geometric extrinsics for both types of sensors. The approach is then evaluated on a real dataset of two HD camcorders and two ToF cameras, and 3D shapes are reconstructed from the calibrated system. The main contributions are: (1) we show that the sphere surface point nearest to the ToF camera center always appears highlighted, and use this observation to extract sphere centers in the ToF camera images; (2) we propose a unified calibration scheme despite the heterogeneity of the sensors. After calibration, this multi-modal sensor network becomes a powerful tool for generating high-quality 3D shapes efficiently.
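As an illustrative sketch (not the authors' exact pipeline), the highlighted-point idea can be turned into a center estimate: take the brightest intensity pixel as the frontmost sphere surface point, back-project it with the measured depth, and step one sphere radius further along the viewing ray. The function name, the pinhole intrinsics `K`, and the convention that the depth image stores radial distance along the ray are all assumptions here.

```python
import numpy as np

def sphere_center_from_tof(intensity, depth, K, radius):
    """Estimate a sphere's 3D center from a ToF intensity/depth image pair.

    Assumes the brightest pixel marks the frontmost (nearest) point on the
    sphere surface, and that depth stores radial distance along the ray.
    """
    v, u = np.unravel_index(np.argmax(intensity), intensity.shape)
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray /= np.linalg.norm(ray)            # unit viewing ray
    surface = depth[v, u] * ray           # frontmost point on the sphere
    return surface + radius * ray         # center lies one radius further
```

With noisy real images one would average over a neighborhood of bright pixels rather than trust a single maximum.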
Ingo Schiller, Christian Beder and Reinhard Koch, 2008
We discuss the joint calibration of novel 3D range cameras based on the time-of-flight principle with the Photonic Mixing Device (PMD) and standard 2D CCD cameras. Due to their small field of view (fov) and low pixel resolution, PMD cameras are difficult to calibrate with traditional calibration methods. In addition, the 3D range data contains systematic errors that need to be compensated. Therefore, a calibration method is developed that estimates the full intrinsic calibration of the PMD camera, including optical lens distortions and systematic range errors, and that calibrates the external orientation together with multiple 2D cameras rigidly coupled to the PMD camera. The calibration approach is based on a planar checkerboard pattern as calibration reference, viewed from multiple angles and distances. By combining the PMD camera with standard CCD cameras, the internal camera parameters can be estimated more precisely and the limitations of the small fov can be overcome. Furthermore, we use the additional cameras to calibrate the systematic depth measurement error of the PMD camera. We show that the correlation between rotation and translation estimation is significantly reduced with our method.
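The systematic range-error compensation can be sketched as a simple least-squares fit. This is a heavily hedged simplification: the paper estimates a full model jointly with the intrinsics, while the snippet below only fits a polynomial depth correction against reference distances (such as those obtained from the checkerboard pose seen by the rigidly coupled CCD cameras). All names are illustrative.

```python
import numpy as np

def fit_depth_correction(measured, reference, degree=3):
    """Fit a polynomial model of the systematic range error.

    `measured` are raw PMD distances, `reference` the corresponding true
    distances from an external source. Returns a callable that corrects
    new raw measurements.
    """
    # Model the residual (reference - measured) as a polynomial in the
    # raw measurement, then add the predicted residual back on.
    coeffs = np.polyfit(measured, reference - measured, degree)
    return lambda d: d + np.polyval(coeffs, d)
```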
Christian Beder and Reinhard Koch
The estimation of the position and orientation of a PMD camera in a global reference frame is required by many measurement applications based on such systems. PMD cameras produce a depth image as well as a reflectance image of low resolution compared to standard optical cameras, so that calibrating the cameras from the reflectance image alone is difficult. We present a novel approach for calibrating the focal length and 3D pose of a PMD camera based on the depth and reflectance images of a planar checkerboard pattern. By integrating both sources of information, higher accuracies can be achieved. Furthermore, a single image is sufficient for calibrating the focal length as well as the 3D pose from a planar reference object. This is because the depth measurements are orthogonal to the lateral intensity measurements and provide direct metric information.
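A minimal sketch of how the depth channel contributes direct metric information: back-project the depth image of the planar pattern and fit a plane, which yields the pattern's normal and its metric distance, constraints that combined with the reflectance image fix focal length and pose from a single view. The function below assumes pinhole intrinsics `K` and depth measured along the optical axis; it is not the authors' estimator.

```python
import numpy as np

def plane_from_depth(depth, K):
    """Fit a plane to back-projected depth measurements.

    Returns the unit plane normal and the orthogonal distance to the
    camera center. Depth is assumed to be distance along the optical
    axis (z), so back-projected points have z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    rays = np.linalg.inv(K) @ pix            # viewing rays with z = 1
    pts = (rays * depth.ravel()).T           # 3D points, shape (N, 3)
    centroid = pts.mean(axis=0)
    # Plane normal = direction of least variance (last right-singular vector).
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return normal, abs(normal @ centroid)
```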
Marvin Lindner and Andreas Kolb
A growing number of modern applications such as position determination, online object recognition and collision prevention depend on accurate scene analysis. A low-cost and fast alternative to standard techniques like laser scanners or stereo vision is distance measurement with modulated, coherent infrared light based on the Photo Mixing Device (PMD) technique. This paper describes an enhanced calibration approach for PMD-based distance sensors, for which highly accurate calibration techniques have not yet been widely investigated. Compared to other known methods, our approach incorporates additional deviation errors related to the active illumination incident on the sensor pixels. The resulting calibration yields significantly more precise distance information. Furthermore, we present a simple-to-use, vision-based approach for acquiring the reference data required by any distance calibration scheme, yielding a lightweight, on-site calibration system with little expenditure in terms of equipment.
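The illumination-related deviation model can be sketched as a bivariate least-squares fit: the correction depends on both the measured distance and the active-illumination amplitude at the pixel. The polynomial basis and all names below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def fit_distance_deviation(dist, amp, error, deg=2):
    """Fit the distance deviation as a function of measured distance
    AND incident amplitude, then return a corrector callable."""
    def basis(d, a):
        # Bivariate monomials d^i * a^j up to the given degree per variable.
        return np.stack([d**i * a**j
                         for i in range(deg + 1)
                         for j in range(deg + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis(dist, amp), error, rcond=None)
    return lambda d, a: d - basis(d, a) @ coeffs
```

The key difference from a distance-only model is the extra `amp` input, mirroring the abstract's point that deviations correlate with the active light reaching each pixel.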
Stefan Fuchs and Gerd Hirzinger
Recently, ToF cameras have attracted attention because of their ability to generate a full 2.5D depth image at video frame rate. Thus, ToF cameras are suitable for real-time 3D tasks such as tracking, visual servoing or object pose estimation. The usability of such systems mainly depends on an accurate camera calibration. In this work, a calibration process for ToF cameras is described with respect to the intrinsic parameters, the depth measurement distortion and the pose of the camera relative to a robot's end-effector. The calibration process is based not only on the monochromatic images of the camera but also on the depth values it generates from a checkerboard pattern. The robustness and accuracy of the presented method are assessed by applying it to randomly selected shots and comparing the calibrated measurements to ground truth obtained from a laser scanner.
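The camera-to-end-effector part of such a calibration is classically posed as the hand-eye problem AX = XB. As a sketch only (the abstract does not specify the solver), the rotation part can be recovered in the style of Tsai-Lenz: for each motion pair, the camera-motion rotation axis is the hand-eye rotation applied to the end-effector-motion axis, so aligning the axis sets with the Kabsch/Procrustes method gives the rotation. All function names are illustrative.

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis from the skew-symmetric part of R
    (valid for rotation angles strictly between 0 and pi)."""
    a = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return a / np.linalg.norm(a)

def hand_eye_rotation(cam_motions, effector_motions):
    """Rotation R_X with axis(R_cam) = R_X @ axis(R_eff) for each pair,
    solved in least squares via the Kabsch method."""
    A = np.array([rotation_axis(R) for R in cam_motions])
    B = np.array([rotation_axis(R) for R in effector_motions])
    U, _, Vt = np.linalg.svd(B.T @ A)        # cross-covariance of axes
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # keep det = +1
    return Vt.T @ D @ U.T
```

At least two motion pairs with non-parallel rotation axes are needed; the translation part would follow from the rotation in a second linear solve.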
Stefan Fuchs and Stefan May
This paper presents a method for precise surface reconstruction with time-of-flight (TOF) cameras. A novel calibration approach is developed that simplifies the calibration task and doubles the camera's precision, and it is compared to current calibration methods. Remaining errors are tackled by applying filtering and error-distribution methods. Thus, a reference object is circumferentially reconstructed with an overall mean precision of approximately 3 mm in translation and 3 deg in rotation. The resulting model serves as a quantification of the achievable reconstruction precision with TOF cameras. This is a major criterion for the potential analysis of this sensor technology, which is demonstrated for the first time in this work.
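Quantifying precision in translation and rotation, as in the evaluation above, comes down to pose-error metrics such as the following sketch (illustrative names; translation error as Euclidean distance, rotation error as the angle of the residual rotation):

```python
import numpy as np

def pose_error(R_est, t_est, R_gt, t_gt):
    """Translation error (Euclidean norm) and rotation error (degrees)
    between an estimated pose and a ground-truth pose."""
    dt = np.linalg.norm(t_est - t_gt)
    dR = R_est.T @ R_gt                      # residual rotation
    cos_angle = np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_angle))
```

Averaging these two numbers over many views of the reference object yields mean precision figures of the kind reported in the abstract.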