A. Prusak, O. Melnychuk, H. Roth, I. Schiller, R. Koch, 2007
In this paper we describe a joint approach for robot navigation with collision avoidance, pose estimation, and map building using a 2.5D PMD (Photonic Mixer Device) camera combined with a high-resolution spherical camera. The cameras are mounted at the front of the robot at a certain inclination angle. Navigation and map building consist of two steps: when entering new terrain, the robot first scans its surroundings, and a 3D panorama is simultaneously generated from the PMD images. In the second step the robot moves along the predefined path, using the PMD camera for collision avoidance and a combined Structure-from-Motion (SfM) and model-tracking approach for self-localization. The computed robot poses are simultaneously used for map building with new measurements from the PMD camera.
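The collision-avoidance step described above relies on the PMD camera's direct range measurements. As a minimal illustrative sketch (not the authors' method; the stop distance and pixel-count threshold are assumptions for demonstration), a depth image can be checked for nearby obstacles like this:

```python
import numpy as np

def obstacle_ahead(depth_image, stop_distance=0.5, min_pixels=20):
    """Toy collision check on a PMD-style range image (values in meters).

    Flags an obstacle when enough valid pixels report a range below
    stop_distance. Both thresholds are illustrative assumptions,
    not values taken from the paper.
    """
    depth = np.asarray(depth_image, dtype=float)
    # Ignore invalid returns (non-finite or non-positive range values).
    valid = np.isfinite(depth) & (depth > 0)
    close = valid & (depth < stop_distance)
    return int(close.sum()) >= min_pixels
```

A real system would additionally restrict the test to the region of the image that projects onto the robot's path, but the core idea, thresholding metric range data directly, is what distinguishes this from purely image-based avoidance.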
Christian Beder, Bogumil Bartczak and Reinhard Koch, 2007
Real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) can be considered a complementary technique to stereo-vision-based depth estimation. Since these systems directly yield 3D measurements, they can also be used to initialize vision-based approaches, especially in highly dynamic environments. Fusion of PMD depth images with passive intensity-based stereo is a promising approach for obtaining reliable surface reconstructions even in weakly textured surface regions. In this work a PMD-stereo fusion algorithm for the estimation of patchlets from a combined PMD-stereo camera rig is presented. We define a patchlet as a small oriented planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how these two approaches can be fused into a single estimation that yields results even if either of the two individual approaches fails.
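The core of the least-squares patchlet estimation from range data is a plane fit to a local neighborhood of 3D points. As a minimal sketch of that step (the function name and interface are hypothetical; the paper's full scheme also propagates measurement uncertainty, which is omitted here), the center and unit normal that minimize the sum of squared point-to-plane distances can be obtained via SVD:

```python
import numpy as np

def fit_patchlet(points):
    """Least-squares plane fit to an (N, 3) array of 3D points.

    Returns (centroid, normal): the patchlet center and a unit surface
    normal. The normal is the right singular vector associated with the
    smallest singular value of the centered point cloud, i.e. the
    direction of least variance, which minimizes the sum of squared
    orthogonal point-to-plane distances.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered points; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]  # least-variance direction = plane normal
    return centroid, normal
```

The sign of the returned normal is ambiguous; in a camera rig one would typically flip it to face the sensor. Fusing this with the stereo-derived estimate then amounts to combining both measurement equations in one joint least-squares problem.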