Tatsuya NOBUNAGA Toshiaki WATANABE Hiroya TANAKA
Individuals can be identified by features extracted from an electrocardiogram (ECG). However, irregular palpitations caused by stress or exercise distort the ECG waveform and degrade identification accuracy. In this letter, we propose a human identification scheme based on the frequency spectra of an ECG, which can successfully extract features and thus identify individuals even while they are exercising. For the proposed scheme, we demonstrate an accuracy rate of 99.8% in a controlled experiment with exercising subjects. This level of accuracy is achieved by determining the significant features of individuals with a random forest classifier. In addition, the effectiveness of the proposed scheme is verified using a publicly available ECG database, on which the proposed scheme also achieves high accuracy.
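The pipeline the abstract describes — spectral features from ECG segments fed to a random forest — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic two-subject data, the binned-magnitude-spectrum features, and all parameter values are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectrum_features(ecg, n_bins=32):
    """Magnitude spectrum of an ECG segment, pooled into coarse frequency bins."""
    mag = np.abs(np.fft.rfft(ecg))
    # Pooling into a fixed number of bins maps segments of any length
    # to the same feature dimension (bin count is an illustrative choice).
    edges = np.linspace(0, len(mag), n_bins + 1, dtype=int)
    return np.array([mag[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

# Hypothetical stand-in data: two "subjects" with different dominant rhythms.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 512)
X, y = [], []
for label, freq in [(0, 1.2), (1, 2.0)]:
    for _ in range(20):
        beat = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
        X.append(spectrum_features(beat))
        y.append(label)

# The random forest learns which spectral bins separate the subjects.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

A forest's per-feature importances (`clf.feature_importances_`) are one way to "determine the significant features of individuals" that the abstract mentions.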
Pramual CHOORAT Werapon CHIRACHARIT Kosin CHAMNONGTHAI Takao ONOYE
In tooth contour extraction, the intensity difference between the tooth and the dental bone in x-ray images is insufficient, and this difference must be enhanced to improve the accuracy of tooth segmentation. This paper proposes a method to improve the intensity contrast between the tooth and the dental bone. The method consists of tooth orientation estimation (intensity projection, smoothing filter, and peak detection) and PCA-Stacked Gabor filtering with ellipse Gabor banks. Tooth orientation estimation determines the angle of a single oriented tooth. PCA-Stacked Gabor filtering with ellipse Gabor banks is then applied, in particular to enhance the border between the tooth and the dental bone. Finally, active contour extraction is performed to determine the tooth contour. In the experiment, compared with the conventional active contour without edge (ACWE) method, the average mean square error (MSE) of the extracted tooth contour points is reduced from 26.93% and 16.02% to 19.07% and 13.42% for tooth x-ray type I and type H images, respectively.
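The core idea of using an oriented Gabor filter to enhance a weak tooth/bone border can be illustrated with a minimal sketch. This is not the paper's PCA-Stacked Gabor method; the kernel parameters and the synthetic step-edge image are assumptions chosen only to show how an orientation-tuned filter amplifies a boundary.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, lam=8.0, sigma=4.0, size=15):
    """Odd-phase (sine) Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / lam)

# Hypothetical test image: a vertical step edge standing in for a
# low-contrast tooth/bone boundary.
img = np.zeros((64, 64))
img[:, 32:] = 1.0

# A filter oriented across the edge responds strongly at the boundary and
# is near zero in flat regions (the sine-phase kernel is zero-mean).
resp = np.abs(convolve2d(img, gabor_kernel(theta=0.0), mode="same"))
```

In the paper's setting, the estimated tooth angle would select `theta`, and a bank of such filters (the ellipse Gabor bank) would cover a range of scales and orientations before the active contour step.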
Jinil HONG Woo Suk YANG Dongmin KIM Young-Ju KIM
In this paper, we introduce a new technique for extracting unique features from an iris image using scale-space filtering. The resulting iris code can be used to build a system for rapid, automatic human identification with high reliability and confidence. First, the iris region is separated from the whole image, and the radius and center of the iris are estimated. Next, regions likely to be noise are discriminated, and the features present in the highly detailed pattern are extracted. Scale-space filtering is applied in order to preserve the original signal while minimizing the effect of noise. Experiments are performed on a set of 272 iris images taken from 18 persons. Test results show that the iris feature patterns of different persons are clearly discriminated from those of the same person.
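The role of scale-space filtering — suppressing fine noise while keeping the coarse iris pattern — can be sketched on a 1D signal. This is a generic illustration, not the paper's method; the synthetic "iris signature", the noise level, and the scale values are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical 1D iris signature: a slow pattern plus high-frequency noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 400)
pattern = np.sin(3 * x)
signal = pattern + 0.4 * rng.standard_normal(x.size)

# A scale-space stack: the same signal smoothed at increasing Gaussian
# scales. Fine noise vanishes at coarse scales while the underlying
# pattern survives, so features can be traced across scales.
scales = [1, 4, 16]
stack = [gaussian_filter1d(signal, sigma=s) for s in scales]
```

Features that persist across several scales in such a stack are the stable ones worth encoding into an iris code; isolated fine-scale responses are treated as noise.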