Kei IWASAKI Fujiichi YOSHIMOTO Yoshinori DOBASHI Tomoyuki NISHITA
Caustics are patterns of light focused by reflective or refractive objects. Because of their visually fascinating patterns, several methods have been developed to render caustics. We propose a method for quickly rendering caustics formed by light that is refracted and converged by transparent objects. First, in a preprocessing step, we calculate sampling rays incident on each vertex of the object and trace them until they leave the object, taking refraction into account. The position and direction of each ray that finally exits the transparent object are obtained and stored in a lookup table. Then, in the rendering process, when the object is illuminated, the positions and directions of the rays leaving the object are computed using the lookup table. This makes it possible to render refractive caustics due to transparent objects at interactive frame rates, allowing us to change the light position and direction and to translate and rotate the object.
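The preprocessing step can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration rather than the paper's implementation: it traces sampled incident rays through a transparent sphere (two refraction events via Snell's law) and stores each exit position and direction in a lookup table keyed by vertex and sample index. The sphere geometry, the refractive index of 1.5, and the direction-sampling scheme are all assumptions.

```python
# Sketch of the preprocessing stage: refract sampled incident rays through a
# transparent sphere and store the exit position and direction per (vertex, sample).
# The sphere geometry, refractive index, and sampling scheme are assumptions.
import numpy as np

ETA = 1.0 / 1.5  # relative refractive index entering the object (air -> glass), assumed

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (Snell's law).
    n must point against d; returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def tangent_of(n):
    """Any unit vector perpendicular to n (used to build sample directions)."""
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(n, a)
    return t / np.linalg.norm(t)

def sphere_exit(p, d, center, radius):
    """Far intersection of the ray (p, d) with the sphere, i.e. the exit point."""
    oc = p - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    return p + (-b + np.sqrt(max(b * b - c, 0.0))) * d

def precompute_table(vertices, center, radius, n_samples=8):
    """For each vertex and sampled incident direction, trace the ray through the
    object (entry and exit refraction) and store where and in which direction it
    finally leaves the object."""
    table = {}
    for vi, v in enumerate(vertices):
        n_in = (v - center) / radius            # outward normal at the vertex
        t = tangent_of(n_in)
        for si in range(n_samples):
            theta = 0.5 * np.pi * si / n_samples
            d_in = -np.cos(theta) * n_in + np.sin(theta) * t   # incident direction
            d1 = refract(d_in, n_in, ETA)       # refraction at the entry point
            if d1 is None:
                continue
            p_exit = sphere_exit(v, d1, center, radius)
            n_out = (p_exit - center) / radius
            d2 = refract(d1, -n_out, 1.0 / ETA)  # refraction at the exit point
            if d2 is not None:
                table[(vi, si)] = (p_exit, d2)
    return table
```

At render time, the stored exit positions and directions are looked up for the rays selected by the current light position and direction, so no ray tracing through the object is needed per frame.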
We present a novel precomputed radiance transfer method for efficient relighting under all-frequency environment illumination. The environment illumination is represented as a set of environment lights, each comprising a direction and an intensity. In a preprocessing step, the environment lights are clustered, taking into account only the light directions. Our experiments confirmed that the environment lights can be grouped into far fewer clusters than their original number. Given any environment illumination, sampled as an environment map, efficient relighting is then achieved by computing the radiance using the precomputed clusters. The proposed method enables relighting under very high-resolution environment illumination. In addition, unlike previous approaches, the proposed method can efficiently perform relighting when only some regions of the given environment illumination change.
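A minimal sketch of the clustering and relighting idea is given below; it is an illustration under simplifying assumptions (spherical k-means on directions, diffuse-only transfer at a single surface point), not the authors' code, and all function names are hypothetical.

```python
# Sketch: cluster environment-light directions, precompute a per-cluster transfer
# for one diffuse surface point, and relight by a cluster-level dot product.
# Spherical k-means and the diffuse-only transfer are assumptions.
import numpy as np

def cluster_directions(dirs, k, iters=20, seed=0):
    """Cluster unit light directions into k clusters (directions only)."""
    rng = np.random.default_rng(seed)
    centers = dirs[rng.choice(len(dirs), k, replace=False)].copy()
    labels = np.zeros(len(dirs), dtype=int)
    for _ in range(iters):
        labels = np.argmax(dirs @ centers.T, axis=1)   # nearest center by cosine
        for c in range(k):
            members = dirs[labels == c]
            if len(members):
                m = members.mean(axis=0)
                centers[c] = m / np.linalg.norm(m)
    return labels, centers

def precompute_transfer(centers, normal):
    """Per-cluster transfer at one surface point: a clamped cosine for a diffuse
    surface (visibility and BRDF would be folded in here in a full system)."""
    return np.maximum(centers @ normal, 0.0)

def relight(intensities, labels, transfer, k):
    """Sum the intensities of each cluster's lights, then take the dot product
    with the precomputed per-cluster transfer."""
    cluster_intensity = np.zeros(k)
    np.add.at(cluster_intensity, labels, intensities)
    return float(cluster_intensity @ transfer)
```

Because each cluster only stores the sum of its members' intensities, changing some regions of the environment map requires updating only the clusters whose member lights changed, which is consistent with the partial-relighting property described above.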
Pablo GARCIA TRIGO Henry JOHAN Takashi IMAGIRE Tomoyuki NISHITA
We propose an interactive method for assisting the coloring process of 2D hand-drawn animated cartoons. It segments input frames (each hand-drawn drawing of the cartoon) into regions (areas surrounded by closed lines, e.g., the head or the hands), extracts their features, and then matches the regions between frames, allowing the user to fix coloring mistakes interactively. Its main contribution is storing matched regions in lists called "chains" that track how the region features vary along the animation. Consequently, the matching rate is improved and matching mistakes are reduced, thus reducing the total effort needed to obtain a correctly colored cartoon.
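A minimal sketch of the chain idea is shown below; it is an illustration, not the paper's system. The region features (area and centroid), the similarity measure, and the threshold are assumptions.

```python
# Sketch of tracking matched regions across frames with "chains": each chain
# records how one region's features evolve, and new-frame regions are appended
# to the best-matching chain. Features, similarity, and threshold are assumptions.
from dataclasses import dataclass, field

@dataclass
class Region:
    area: float            # number of pixels enclosed by the closed line
    centroid: tuple        # (x, y) center of the region
    color: int = -1        # color label, -1 while still uncolored

@dataclass
class Chain:
    regions: list = field(default_factory=list)   # one matched region per frame

def similarity(a, b):
    """Smaller is more similar: relative area change plus centroid displacement."""
    da = abs(a.area - b.area) / max(a.area, b.area)
    dx, dy = a.centroid[0] - b.centroid[0], a.centroid[1] - b.centroid[1]
    return da + 0.01 * (dx * dx + dy * dy) ** 0.5

def match_frame(chains, frame_regions, threshold=0.5):
    """Greedily append each region of a new frame to the best-matching chain and
    propagate its color; regions with no good match start a new chain and can be
    flagged for the user to fix interactively."""
    for region in frame_regions:
        best, best_score = None, threshold
        for chain in chains:
            score = similarity(chain.regions[-1], region)
            if score < best_score:
                best, best_score = chain, score
        if best is None:
            chains.append(Chain([region]))
        else:
            region.color = best.regions[-1].color
            best.regions.append(region)
```

A greedy best-match per region is used here for brevity; in an interactive workflow, regions that start new chains are natural candidates to present to the user for correction.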