3D Head Tracking Based on Recognition and Interpolation Using a Time-of-Flight Depth Sensor

Salih Burak Göktürk and Carlo Tomasi, 2004

This paper describes a head-tracking algorithm based on recognition and correlation-based weighted interpolation. The input is a sequence of 3D depth images generated by a novel time-of-flight depth sensor. These are processed to segment the background from the foreground, and the foreground is fed to the head-tracking algorithm, which is composed of three major modules. First, a depth signature is created from the depth images. Next, the signature is compared against signatures collected in a training set of depth images. Finally, a correlation metric is calculated for the most probable signature matches. The head location is then computed by interpolating among the stored head positions, using the correlation metrics as weights. This combination of depth sensing and recognition-based head tracking achieves a success rate above 90 percent. Even if the track is temporarily lost, it is easily recovered when a good match is obtained from the training set. The use of depth images and recognition-based head tracking achieves robust real-time tracking under extreme conditions such as 180-degree rotation, temporary occlusions, and complex …
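The interpolation step above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the signature representation, the `correlation` helper, and the thresholding of matches are all assumptions made for the example.

```python
import math

def correlation(a, b):
    # Normalized cross-correlation between two 1-D depth signatures
    # (hypothetical helper; the paper's actual signature is derived
    # from the segmented depth image).
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def interpolate_head(query, training):
    # training: list of (signature, head_xyz) pairs from the training set.
    # Keep only positively correlated matches, then interpolate the stored
    # head positions using the correlation values as weights.
    scored = [(correlation(query, sig), pos) for sig, pos in training]
    scored = [(w, pos) for w, pos in scored if w > 0]
    total = sum(w for w, _ in scored)
    return tuple(sum(w * pos[i] for w, pos in scored) / total
                 for i in range(3))
```

With a query signature identical to one training example and anti-correlated with another, the estimate collapses onto the matching example's stored head position, as expected for a weighted average.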

Visual Tracking Using Color Cameras and Time-of-Flight Range Imaging Sensors

Leila Sabeti, Ehsan Parvizi, Q.M. Jonathan Wu, 2008

This work proposes two particle filter-based visual trackers, one using output images from a color camera and the other using images from a time-of-flight range imaging sensor. The two trackers were compared to identify the advantages and drawbacks of using color-camera output versus time-of-flight range-sensor output for efficient visual tracking. The paper also combines several efficient methods to produce two stable and reliable human trackers, one for each camera.
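A particle filter tracker of the kind used here repeats a predict-weight-resample cycle each frame. The sketch below is a generic 1-D bootstrap filter under assumed random-walk motion and Gaussian observation models; the paper's actual state space, motion model, and color/depth likelihoods differ.

```python
import math
import random

def particle_filter_step(particles, observation, noise=1.0, meas_sigma=2.0):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    A generic 1-D sketch: `noise` and `meas_sigma` are assumed model
    parameters, not values from the paper.
    """
    # Predict: diffuse each particle with random-walk motion noise.
    predicted = [p + random.gauss(0.0, noise) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-((p - observation) ** 2) / (2 * meas_sigma ** 2))
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

Iterating this step pulls the particle cloud toward the observed target, which is the mechanism both trackers in the paper rely on, whether the likelihood comes from color histograms or from range data.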