Shinichiro HIROOKA Hideo SAITO
In this paper, we propose a novel virtual display system that renders digital images onto a real object surface with a video projector, so that the viewer feels as if the images were printed on a real surface of arbitrary shape. The system consists of an uncalibrated camera and a video projector connected to the same PC, and it creates a virtual object by projecting 2D content, prepared beforehand, onto a white object in the real world. For correct geometric registration between the rendered image and the object surface, we regard the surface as a set of small rectangular regions and register the projection by computing homographies between the projector image plane and each divided region. This homography-based method avoids the camera and projector calibration required by conventional methods. The system performs two processes. First, it acquires the state of the object surface from images capturing the scene while color-coded checker patterns are projected onto it, and it generates a distortion-free rendered image by computing the homographies. Once the projection image has been generated, the rendered image can be updated when the object surface moves, or refined while it is stationary, by continuously observing the surface; this second process keeps the display accurate over time. We demonstrate the system under various conditions, including projecting images so that they appear printed on the paper surface of a real book. We expect this system to enable applications such as virtual museums and other industrial uses.
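The abstract does not give the estimation procedure, but the homography between the projector image plane and one rectangular surface region can be recovered from four point correspondences with the standard direct linear transform (DLT). The sketch below illustrates this; the function names are ours, and the correspondences are hypothetical example values, not data from the paper.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src[i] -> dst[i]
    from four point correspondences, via the DLT (SVD null space)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # right singular vector of smallest singular value
    return H / H[2, 2]              # normalize so H[2,2] == 1

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical example: a unit square in the content image mapped to the
# quadrilateral one surface region occupies in the projector image.
H = homography_from_points(
    [(0, 0), (1, 0), (1, 1), (0, 1)],
    [(10, 20), (110, 25), (105, 130), (8, 120)])
```

With one such homography per divided region, warping the 2D content region by region gives a registered projection without ever calibrating the camera or projector intrinsics, which matches the motivation stated in the abstract.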
We present a new method for multiresolution rendering of a complex object. Our method uses viewer-centered features, including the silhouette, in generating the multiresolution model. Because the silhouette of an object depends on the viewer's position, it is difficult to generate in real time. We propose the AGSphere for real-time management of the silhouette: it identifies silhouette parts easily and manages them in a multiresolution manner. The primary feature handled by the AGSphere is the silhouette as seen from the viewer, but it can also be applied to other directional features, such as the light silhouette. In this paper, we show experimental results for silhouettes from both the viewer and the light, and we compare the efficiency of the proposed method with that of other methods. We also propose a new texture-map generation method for use with the multiresolution geometry; the generated texture map provides a valid mapping function for the multiresolution geometry while minimizing texture distortion.
Hyun-Chul SHIN Jin-Aeon LEE Lee-Sup KIM
In texture mapping, anisotropic filtering methods, which require more texels, have been proposed to produce high-quality images; memory bandwidth, however, remains limited by a bottleneck in the texture-filtering hardware. In this paper, we propose anisotropic texture filtering based on the edge function. In generating the weights that govern how texels loaded from memory are filtered, the edge function gives an accurate measure of each texel's contribution to the pixel intensity. The resulting image quality is superior to that of other methods, and for images of the same quality, our method requires less than half the texels of other methods; in other words, it more than doubles their performance.
Conny GUNADI Hiroyuki SHIMIZU Kazuya KODAMA Kiyoharu AIZAWA
The construction of large-scale virtual environments is gaining attention for applications such as virtual malls, virtual sightseeing, and tele-presence. This paper presents a framework for building a realistic virtual environment with a geometry-based approach. We propose an algorithm that constructs a realistic 3-D model from multi-view range data and multi-view texture images. The proposed method adopts the result of region segmentation of the range images in several phases of the modeling process, and we show that the relations obtained from region segmentation are quite effective in improving both registration and mesh merging.
Footprint assembly was proposed to reduce the blurriness of texture-mapped images produced by mipmapping. Although it improves image quality, blurring remains because of the limitation of its filter kernel. This paper proposes a novel texture-filtering method, called adaptive footprint assembly (AFA), to overcome this limitation. The proposed method greatly improves the quality of texture-mapped images.
Chung-Yu LIU Tsorng-Lin CHIA Yibin LU
This work presents a novel scanline-based geometric description of texture-mapping polygons and a simplified mapping function that improves performance. Conventional perspective-correct mapping requires costly division operations. In this work, two concepts from perspective geometry, the cross-ratio and the vanishing point, are exploited to simplify the mapping function: substituting the point at infinity on a scanline into the cross-ratio equation yields a simple description of perspective mapping in polygons. Our mapping function maps a pixel from a scanline on the screen plane to the texture plane using only one division, one multiplication, and three additions. The proposed algorithm speeds up the mapping process without losing correctness, and experimental results indicate that its performance is superior to that of other correct mapping methods.
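For context, the conventional perspective-correct interpolation that this abstract improves on works by interpolating u/w and 1/w linearly in screen space and dividing per pixel. The sketch below shows that baseline (not the paper's cross-ratio method); the endpoint reciprocals can be precomputed once per scanline, so the cost the paper is cutting down is the remaining per-pixel division.

```python
def perspective_correct_u(u0, w0, u1, w1, t):
    """Conventional perspective-correct texture coordinate along a scanline.
    (u0, w0), (u1, w1): texture coord and homogeneous w at the two endpoints.
    t in [0, 1]: screen-space interpolation parameter.
    u/w and 1/w interpolate linearly in screen space; one division per sample."""
    num = (u0 / w0) * (1 - t) + (u1 / w1) * t   # linear in screen space
    den = (1.0 / w0) * (1 - t) + (1.0 / w1) * t
    return num / den
```

Note that linearly interpolating u itself (affine mapping) would give 3.0 at the midpoint of the example in the test below, whereas the perspective-correct value is 1.5: the texture appears compressed toward the far (larger-w) end, which is exactly the foreshortening that correct mapping must reproduce.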
Jong Hyun LEE Jun Sung KIM Kyu Ho PARK
A method to reduce the bandwidth between texture memory and the rasterization processor is proposed. It achieves the reduction by not fetching useless texels from texture memory during bilinear filtering. Since it depends on neither caching nor lossy compression, it can be used in applications where texel reusability is low and lossy compression is not acceptable.
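The abstract does not spell out which texels are "useless", but one obvious case arises whenever a bilinear sample lands exactly on a texel row or column: two (or three) of the four weights become zero, so those fetches contribute nothing. A minimal sketch of bilinear filtering that skips zero-weight fetches, with names of our own choosing:

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Bilinear lookup into a 2D texture that skips fetching any texel
    whose weight is zero (e.g. when u or v is an exact integer)."""
    h, w = tex.shape[:2]
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    fx, fy = u - x0, v - y0          # fractional position inside the 2x2 footprint
    result = 0.0
    for dy, wy in ((0, 1 - fy), (1, fy)):
        if wy == 0:
            continue                 # whole row has zero weight: no fetch
        for dx, wx in ((0, 1 - fx), (1, fx)):
            if wx == 0:
                continue             # zero-weight texel: no fetch
            x = min(x0 + dx, w - 1)  # clamp-to-edge addressing
            y = min(y0 + dy, h - 1)
            result += wy * wx * tex[y, x]
    return result
```

In the degenerate case `fx == fy == 0`, only one texel is fetched instead of four, a 75% bandwidth saving for that sample; the hardware scheme in the paper presumably makes this decision per fetch in a similar spirit.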
Katsumi SUIZU Toshiyuki OGAWA Kazuyasu FUJISHIMA
The ever-increasing demand for higher-bandwidth memories, fueled by multimedia and 3-D graphics, is being addressed by a variety of emerging memory solutions. This paper reviews these emerging DRAM architectures and compares their performance under a common set of conditions, giving the reader a perspective on future, optimized graphics systems.
Akitoshi TSUKAMOTO Chil-Woo LEE Saburo TSUJI
This paper describes a new method for estimating the pose of a human face that moves abruptly in the real world. The virtue of this method is that it uses a very simple measure, the disparity among multiple model images, rather than facial features such as facial organs. Since the disparity between an input image and a model image increases monotonically with the change of facial pose (view direction), we can estimate the pose of the face in the input image by computing its disparity against various model images of the face. To overcome the weakness caused by changes in facial appearance due to individuality or expression, the first model image of the face is detected by employing a qualitative feature model of the frontal face. This model contains statistical information about brightness, observed over many facial images, and is used in a model-based approach. These features are examined everywhere in the input image to compute the "faceness" of each region, and the region with the highest faceness is taken as the initial model image of the face. To obtain new model images for other poses of the face, temporary model images are synthesized by a texture-mapping technique, using a previous model image and a 3-D graphic model of the face. When the pose changes, the most appropriate region for a new model image is found by computing the disparity against the temporary model images. In this serial process, the acquired model images serve not only as templates for tracking the face in the subsequent image sequence, but also as texture images for synthesizing new temporary model images. The acquired model images are accumulated in memory, and their permissible extent of rotation or scale change is evaluated. In the latter part of the paper, we show experimental results on the robustness of the qualitative facial model used to detect frontal faces, and on the pose-estimation algorithm tested on a long sequence of real images containing a moving human face.
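The abstract does not define its disparity measure; a mean sum-of-squared-differences is one common choice for comparing an image region against template images, and the pose estimate then falls out as the model image with minimum disparity. The sketch below uses that assumption; the function names and the toy data are ours.

```python
import numpy as np

def disparity(patch, model):
    """Mean sum-of-squared-differences between an image patch and a model
    image of the same size; one plausible reading of the paper's measure."""
    a = np.asarray(patch, dtype=float)
    b = np.asarray(model, dtype=float)
    return float(np.mean((a - b) ** 2))

def estimate_pose(patch, models):
    """Return the index of the model image (i.e. the pose) whose
    disparity against the patch is smallest."""
    scores = [disparity(patch, m) for m in models]
    return int(np.argmin(scores))
```

Because the measure needs no feature detection, it degrades gracefully when facial organs are occluded or blurred by abrupt motion, which is the robustness argument the abstract makes for a disparity-based approach.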