Chia-Hsiang WU Yung-Nien SUN Yi-Chiao CHEN Chien-Chen CHANG
In this study, we introduce a software pipeline to track feature points across endoscopic video frames. It deals with the low contrast and uneven illumination that commonly afflict endoscopic imaging. In particular, irregular feature trajectories are eliminated to improve quality. The structure of the soft tissue is determined by an iterative factorization method based on the collection of tracked features. A shape-updating mechanism is proposed in order to yield scale-invariant structures. Experimental results show that the tracking method achieved good tracking performance and increased the number of tracked feature trajectories. The real scale and structure of the target scene were successfully estimated, and the recovered structure is more accurate than that of the conventional method.
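To illustrate the factorization step, here is a minimal rank-3 factorization of a measurement matrix in the Tomasi-Kanade style; it is a sketch of the underlying idea only, not the paper's iterative, scale-updating pipeline, and the function name factorize_structure is hypothetical.

```python
import numpy as np

def factorize_structure(W):
    """Rank-3 factorization of a measurement matrix W (2F x P) holding
    the x/y image coordinates of P tracked points over F frames.
    Returns affine motion M (2F x 3) and shape S (3 x P)."""
    t = W.mean(axis=1, keepdims=True)   # per-frame centroid (translation)
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    r = np.sqrt(s[:3])
    M = U[:, :3] * r                    # camera/motion factor
    S = r[:, None] * Vt[:3, :]          # shape factor (up to an affine ambiguity)
    return M, S, t
```

A single SVD pass like this recovers motion and shape only up to an affine ambiguity; an iterative method with a shape-updating mechanism, as the paper describes, would refine this toward the true scale.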
Localization of a vehicle is a key component of driving assistance and autonomous navigation. In this work, we propose a visual positioning system (VPS) for vehicle or mobile robot navigation. Unlike general landmark-based or model-based approaches, which rely on predefined landmarks or a priori information about the environment, no assumptions about prior knowledge of the scene are made. A stereo vision system is built both to extract feature correspondences and to recover the 3-D information of the scene from image sequences. The relative camera motion is then estimated by registering the 3-D feature points from two consecutive image frames. Localization of the mobile platform is finally given with reference to its initial position.
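The registration of 3-D feature points between consecutive frames can be sketched with the standard SVD-based (Kabsch) least-squares rigid alignment; the paper's exact registration scheme may differ, and estimate_rigid_motion is a hypothetical helper name.

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Least-squares rigid transform (R, t) aligning 3-D point sets
    P -> Q (both 3 x N, corresponding columns), via the SVD-based
    Kabsch method, so that Q ~= R @ P + t."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det(R) = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t
```

Chaining the per-frame transforms (R, t) accumulates the camera pose relative to the initial position, which is how localization is finally expressed.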
Yasuyuki SUGAYA Kenichi KANATANI
Feature point tracking over a video sequence fails when the points move out of the field of view or are occluded by other objects. In this paper, we extend such interrupted trajectories by imposing the constraint that, under the affine camera model, all feature trajectories should lie in an affine space. Our method consists of iterations for optimally extending the trajectories and for optimally estimating the affine space, coupled with an outlier removal process. Using real video images, we demonstrate that our method can restore a sufficient number of trajectories for detailed 3-D reconstruction.
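A minimal sketch of one ingredient follows: completing a partial trajectory by a least-squares fit within a 3-D affine space estimated from complete trajectories. The paper's method additionally alternates the extension and the space estimation with outlier removal; the function names here are hypothetical.

```python
import numpy as np

def fit_affine_space(T):
    """T: (2F x P) complete trajectories as columns. Returns the origin m
    and an orthonormal basis B (2F x 3) of the 3-D affine space they span."""
    m = T.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(T - m, full_matrices=False)
    return m, U[:, :3]

def extend_trajectory(x, observed, m, B):
    """Fill in the missing entries of a partial trajectory x (2F,),
    given a boolean mask `observed`, by least-squares projection onto
    the affine space (m, B) -- a sketch of the subspace-completion idea,
    not the authors' exact optimization."""
    c, *_ = np.linalg.lstsq(B[observed], x[observed] - m[observed, 0], rcond=None)
    full = m[:, 0] + B @ c
    out = x.copy()
    out[~observed] = full[~observed]    # keep observed coordinates as-is
    return out
```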
Yasuyuki SUGAYA Kenichi KANATANI
Many feature tracking algorithms have been proposed for motion segmentation, but the resulting trajectories are not necessarily correct. In this paper, we propose a technique for removing outliers based on the knowledge that correct trajectories are constrained to lie in a subspace of their domain. We first fit an appropriate subspace to the detected trajectories using RANSAC and then remove outliers by considering the error behavior of actual video tracking. Using real video sequences, we demonstrate that our method can be applied even when multiple motions exist in the scene. We also confirm that the separation accuracy is indeed improved by our method.
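The RANSAC subspace-fitting step might look like the following sketch, which flags trajectories far from a d-dimensional linear subspace (d = 4 corresponds to a single affine-camera motion). The threshold and iteration count are hypothetical tuning values, and the paper's error model for actual video tracking is more refined.

```python
import numpy as np

def ransac_subspace_outliers(W, d=4, iters=500, thresh=1.0, seed=None):
    """Flag outlier trajectories (columns of W, 2F x P) by RANSAC-fitting
    a d-dimensional linear subspace and thresholding residual distances.
    Returns a boolean array, True where the trajectory is an outlier."""
    rng = np.random.default_rng(seed)
    P = W.shape[1]
    best_inliers = np.zeros(P, dtype=bool)
    for _ in range(iters):
        sample = rng.choice(P, size=d, replace=False)
        # Orthonormal basis of the subspace spanned by the sampled columns.
        Q, _ = np.linalg.qr(W[:, sample])
        # Distance of every trajectory to the candidate subspace.
        resid = np.linalg.norm(W - Q @ (Q.T @ W), axis=0)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return ~best_inliers
```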
Liyanage C. DE SILVA Kiyoharu AIZAWA Mitsutoshi HATORI
In this paper, face feature detection and tracking are discussed using methods called edge pixel counting and deformable circular template matching. Instead of utilizing the color or gray-scale information of the facial image, the proposed edge pixel counting method utilizes edge information to estimate face feature positions such as the eyes, nose, and mouth, using a variable-size face feature template whose initial size is predetermined from a facial image database. The method is robust in the sense that detection is possible for facial images with different skin colors and different facial orientations. Subsequently, the two iris positions of the face are determined by deformable circular template matching and are used in the edge pixel counting to track the features in the next frame. Although feature tracking using gray-scale template matching often fails when the inter-frame correlation around the feature areas is very low due to facial expression changes (such as talking, smiling, and eye blinking), feature tracking using edge pixel counting can track facial features reliably. Some experimental results are shown to demonstrate the effectiveness of the proposed method.
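As a rough illustration of edge pixel counting, the sketch below builds a binary edge map and finds the fixed-size window containing the most edge pixels via an integral image; the paper's variable-size templates and deformable circular matching are not reproduced, and all names and thresholds are hypothetical.

```python
import numpy as np

def edge_map(gray, thresh=30.0):
    """Binary edge map from simple finite-difference gradients.
    The threshold is a hypothetical tuning value."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh

def locate_by_edge_count(edges, h, w):
    """Return the top-left corner of the h x w window with the most
    edge pixels -- a minimal fixed-size stand-in for the edge pixel
    counting search over candidate feature positions."""
    # Integral image makes every window sum an O(1) lookup.
    ii = np.pad(edges.astype(int).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    counts = ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]
    return np.unravel_index(counts.argmax(), counts.shape)
```

Because the score depends only on edge locations rather than intensity values, such a search stays stable under the expression-induced appearance changes that break gray-scale template correlation.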