Tomohiro MASHITA Koichi SHINTANI Kiyoshi KIYOKAWA
This paper presents a user study on the effects of hand and ocular dominance on pointing gestures. The results of this study are applicable to the design of new gesture interfaces that match a user's cognition and are intuitive and easy to use. The user study investigates the relationship between participants' dominances and their pointing gestures. Four participant groups—right-handed right-eye dominant, right-handed left-eye dominant, left-handed right-eye dominant and left-handed left-eye dominant—were prepared, and participants were asked to point at targets on a screen with their left and right hands. The pointing errors among the different participant groups were calculated and compared. The results show that using the dominant eye produces better accuracy than using the non-dominant eye, and that accuracy increases when the target is located on the same side as the dominant eye. Based on these properties, a method to determine the dominant eye for pointing gestures is proposed. This method identifies an individual's dominant eye with more than 90% accuracy.
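The idea of estimating the dominant eye from pointing errors can be sketched as follows. This is only an illustrative reconstruction, assuming (as the abstract suggests) that pointing error is smaller when the dominant eye is used; the data format, trial structure, and decision rule are assumptions, not details from the paper.

```python
import math

def pointing_error(pointed, target):
    """Euclidean distance between the pointed position and the target
    (both in screen coordinates)."""
    return math.dist(pointed, target)

def estimate_dominant_eye(trials):
    """Estimate the dominant eye from pointing trials.

    trials: list of (eye, pointed_xy, target_xy) tuples, where eye is
    'left' or 'right'. The eye with the lower mean pointing error is
    taken to be dominant (illustrative decision rule).
    """
    errors = {"left": [], "right": []}
    for eye, pointed, target in trials:
        errors[eye].append(pointing_error(pointed, target))
    mean_error = {eye: sum(e) / len(e) for eye, e in errors.items() if e}
    return min(mean_error, key=mean_error.get)
```

For example, a participant whose left-eye trials land consistently closer to the targets than their right-eye trials would be classified as left-eye dominant.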
Kyohei YOSHIKAWA Takashi MACHIDA Kiyoshi KIYOKAWA Haruo TAKEMURA
Displaying a 3D geometric model of a user in real time is an advantage for a telecommunication system because depth information is useful for nonverbal communication such as finger-pointing and gesturing, which carry 3D information. However, the range image acquired by a rangefinder suffers from errors due to image noise and distortion in depth measurement. On the other hand, a 2D image is free from such errors. In this paper, we propose a new method for a shared space communication system that combines the advantages of both 2D and 3D representations. A user is represented as a 3D geometric model in order to convey nonverbal communication cues. The background is displayed as a 2D image to give the user adequate information about the environment of the remote site. Additionally, a high-resolution texture taken by a video camera is projected onto the 3D geometric model of the user, because the low resolution of the image acquired by the rangefinder makes it difficult to exchange facial expressions. Furthermore, to fill in the region occluded by the user, old pixel values are used for the user area in the 2D background image. We have constructed a prototype of a high-presence shared space communication system based on our method. Through a number of experiments, we have found that our method is more effective for telecommunication than methods using only a 2D or only a 3D representation.
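The background-update idea described above can be sketched as follows: pixels currently occluded by the user retain their most recently observed values in the stored 2D background image, while visible pixels are refreshed from the current frame. Array shapes and the source of the user mask (e.g. segmentation of the range image) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def update_background(background, frame, user_mask):
    """Update the stored 2D background image.

    background: stored HxW (or HxWx3) background image
    frame:      current camera image of the same shape
    user_mask:  boolean HxW array, True where the user occludes the scene

    Visible pixels are refreshed from the current frame; occluded pixels
    keep their old values, filling in the area behind the user.
    """
    updated = background.copy()
    updated[~user_mask] = frame[~user_mask]
    return updated
```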
Nobuchika SAKATA Kohei KANAMORI Tomu TOMINAGA Yoshinori HIJIKATA Kensuke HARADA Kiyoshi KIYOKAWA
The aim of this study is to calculate optimal walking routes in real space for users playing immersive virtual reality (VR) games without compromising their immersion. To this end, we propose a navigation system that automatically determines the route a VR user should take to avoid collisions with surrounding obstacles. The proposed method is evaluated by simulating a real environment and is verified to be capable of calculating and displaying walking routes that safely guide users to their destinations without compromising their VR immersion. In addition, users walking in real space while experiencing VR content can choose between 6-DoF (six degrees of freedom) and 3-DoF (three degrees of freedom) conditions; we expect users to prefer the 3-DoF condition, as they tend to walk longer under it while using VR content. In dynamic situations, where two pedestrians are added to the computer-generated real environment, the walking route must be calculated using moving-body prediction, and the moving bodies must be displayed in virtual space to preserve immersion.