The goal of this research, as an integral component of virtual space teleconferencing systems, is to generate three-dimensional facial models from facial images and to synthesize images of the models as viewed from different virtual angles. Since there is a great gap between the images and the 3D model, we argue that a base face model is necessary to provide a framework. The base model is built by carefully selecting and measuring a set of points on the face whose corresponding points can be readily identified in the input images, and another set of points that can be determined from the first point set. The input images are a front view and a side view of the face. First, the extremal boundaries are extracted or interpolated, and facial features such as the eyes, nose and mouth are extracted. The extracted features are then matched between the two images, and their 3D positions are calculated. Using these 3D data, the prepared base face model is modified to approximate the face. Finally, images of the modified 3D model are synthesized by assuming new virtual viewing angles. The originality and significance of this work lie in the fact that the face model can be generated automatically.
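The core geometric step described above can be illustrated with a minimal sketch. Assuming orthographic projection (an assumption for illustration; the paper does not publish code), a feature matched between the two views gives its horizontal position x from the front image, its depth z from the side image, and a y coordinate visible in both. The function name `to_3d` and the sample coordinates below are hypothetical.

```python
def to_3d(front_pt, side_pt):
    """Combine a matched feature from a front view and a side view into a 3D point.

    front_pt: (x, y) image coordinates in the front view.
    side_pt:  (z, y) image coordinates in the side view.
    Under orthographic projection both views share the y axis, so the two
    y estimates should agree up to matching error; we average them.
    """
    x, y_front = front_pt
    z, y_side = side_pt
    y = (y_front + y_side) / 2.0
    return (x, y, z)

# Example: a matched eye-corner feature observed in both views.
front = (120.0, 88.0)   # (x, y) pixels in the front image
side = (64.0, 90.0)     # (z, y) pixels in the side image
print(to_3d(front, side))  # -> (120.0, 89.0, 64.0)
```

The resulting 3D feature positions are what drive the modification of the base face model.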
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Gang XU, Hiroshi AGAWA, Yoshio NAGASHIMA, Fumio KISHINO, Yukio KOBAYASHI, "Three-Dimensional Face Modeling for Virtual Space Teleconferencing Systems" in IEICE TRANSACTIONS,
vol. E73-E, no. 10, pp. 1753-1761, October 1990.
Abstract: The goal of this research, as an integral component of virtual space teleconferencing systems, is to generate three-dimensional facial models from facial images and to synthesize images of the models as viewed from different virtual angles. Since there is a great gap between the images and the 3D model, we argue that a base face model is necessary to provide a framework. The base model is built by carefully selecting and measuring a set of points on the face whose corresponding points can be readily identified in the input images, and another set of points that can be determined from the first point set. The input images are a front view and a side view of the face. First, the extremal boundaries are extracted or interpolated, and facial features such as the eyes, nose and mouth are extracted. The extracted features are then matched between the two images, and their 3D positions are calculated. Using these 3D data, the prepared base face model is modified to approximate the face. Finally, images of the modified 3D model are synthesized by assuming new virtual viewing angles. The originality and significance of this work lie in the fact that the face model can be generated automatically.
URL: https://globals.ieice.org/en_transactions/transactions/10.1587/e73-e_10_1753/_p
@ARTICLE{e73-e_10_1753,
author={Gang XU and Hiroshi AGAWA and Yoshio NAGASHIMA and Fumio KISHINO and Yukio KOBAYASHI},
journal={IEICE TRANSACTIONS},
title={Three-Dimensional Face Modeling for Virtual Space Teleconferencing Systems},
year={1990},
volume={E73-E},
number={10},
pages={1753-1761},
abstract={The goal of this research, as an integral component of virtual space teleconferencing systems, is to generate three-dimensional facial models from facial images and to synthesize images of the models as viewed from different virtual angles. Since there is a great gap between the images and the 3D model, we argue that a base face model is necessary to provide a framework. The base model is built by carefully selecting and measuring a set of points on the face whose corresponding points can be readily identified in the input images, and another set of points that can be determined from the first point set. The input images are a front view and a side view of the face. First, the extremal boundaries are extracted or interpolated, and facial features such as the eyes, nose and mouth are extracted. The extracted features are then matched between the two images, and their 3D positions are calculated. Using these 3D data, the prepared base face model is modified to approximate the face. Finally, images of the modified 3D model are synthesized by assuming new virtual viewing angles. The originality and significance of this work lie in the fact that the face model can be generated automatically.},
keywords={},
doi={},
ISSN={},
month={October},}
TY - JOUR
TI - Three-Dimensional Face Modeling for Virtual Space Teleconferencing Systems
T2 - IEICE TRANSACTIONS
SP - 1753
EP - 1761
AU - Gang XU
AU - Hiroshi AGAWA
AU - Yoshio NAGASHIMA
AU - Fumio KISHINO
AU - Yukio KOBAYASHI
PY - 1990
DO -
JO - IEICE TRANSACTIONS
SN -
VL - E73-E
IS - 10
JA - IEICE TRANSACTIONS
Y1 - October 1990
AB - The goal of this research, as an integral component of virtual space teleconferencing systems, is to generate three-dimensional facial models from facial images and to synthesize images of the models as viewed from different virtual angles. Since there is a great gap between the images and the 3D model, we argue that a base face model is necessary to provide a framework. The base model is built by carefully selecting and measuring a set of points on the face whose corresponding points can be readily identified in the input images, and another set of points that can be determined from the first point set. The input images are a front view and a side view of the face. First, the extremal boundaries are extracted or interpolated, and facial features such as the eyes, nose and mouth are extracted. The extracted features are then matched between the two images, and their 3D positions are calculated. Using these 3D data, the prepared base face model is modified to approximate the face. Finally, images of the modified 3D model are synthesized by assuming new virtual viewing angles. The originality and significance of this work lie in the fact that the face model can be generated automatically.
ER -