Hironobu TAKANO Hiroki KOBAYASHI Kiyomi NAKAMURA
We previously proposed a rotation-spreading neural network (R-SAN net). This neural network can recognize the orientation of an object irrespective of its shape, and its shape irrespective of its orientation. The R-SAN net is well suited to orientation recognition of concentric circular patterns such as iris images. Previously, variations in ambient lighting conditions degraded iris detection. In this study, we introduce brightness normalization to improve the accuracy of iris detection under various lighting conditions. Brightness normalization provides high-accuracy iris extraction even under severe lighting conditions. A recognition experiment investigated the characteristics of rotation and shape recognition for both learned and un-learned iris images at various in-plane rotation angles. The R-SAN net recognized the rotation angle of learned iris images in arbitrary orientations, but not that of un-learned iris images. Thus, the variation in rotation angle could be corrected only for learned irises, not for un-learned ones. Although the R-SAN net correctly recognized the learned irises, it could not completely reject the un-learned irises as unregistered. Exploiting this orientation recognition characteristic of the R-SAN net, we introduced a minimum distance as a new shape recognition criterion. Consequently, the R-SAN net combined with the minimum distance criterion correctly recognized the learned (registered) irises and correctly rejected the un-learned (unregistered) irises.
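The abstract does not give the exact normalization formula, but the idea of brightness normalization ahead of iris extraction can be sketched with a generic zero-mean, unit-variance rescaling. Everything below (the function name `normalize_brightness`, the ±3σ clipping) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def normalize_brightness(image: np.ndarray) -> np.ndarray:
    """Generic brightness normalization: standardize intensities to
    zero mean / unit variance, clip to +/-3 sigma, and rescale to
    the 8-bit range. A simplified stand-in for the normalization
    step described in the abstract."""
    img = image.astype(np.float64)
    mean, std = img.mean(), img.std()
    if std == 0:
        # Flat image: no contrast to normalize; return mid-gray.
        return np.full_like(image, 128, dtype=np.uint8)
    z = (img - mean) / std
    z = np.clip(z, -3.0, 3.0)
    out = (z + 3.0) / 6.0 * 255.0
    return out.astype(np.uint8)
```

A preprocessing step like this makes the subsequent iris boundary detection less sensitive to global illumination changes, since thresholds then operate on a roughly fixed intensity distribution.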
Rina TAGAMI Hiroki KOBAYASHI Shuichi AKIZUKI Manabu HASHIMOTO
With the revitalization of the semiconductor industry and the push toward labor saving and unmanned operation in the retail and food manufacturing industries, the objects to be recognized at production sites are increasingly diverse in color and design. Depending on the target object, it may be more reliable to process only color information, only intensity information, or a combination of the two. However, few conventional methods optimize which color and intensity information to use, and deep learning is too costly for production sites. In this paper, we optimize the combination of color and intensity information for a small number of pixels used for matching, within the framework of template matching, on the basis of the mutual relationship between the target object and surrounding objects. We propose a fast and reliable matching method using these few pixels. Pixels with a low pattern frequency are selected from color and grayscale images of the target object, and from these, pixels that are highly discriminative against surrounding objects are carefully chosen. The use of both color and intensity information makes the method highly versatile with respect to object design, and the use of a small number of pixels not shared by the target and surrounding objects provides high robustness to those objects and enables fast matching. Experiments using real images confirmed that when 14 pixels are used for matching, the processing time is 6.3 msec and the recognition success rate is 99.7%. The proposed method also showed better positional accuracy than the comparison method, and the optimized pixels yielded a higher recognition success rate than non-optimized pixels.
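The core idea of matching with a handful of rarely occurring pixels can be sketched as follows. This is a deliberately simplified illustration on a single grayscale channel: the pixel selection here uses only intensity-value rarity within the template (the paper additionally optimizes over color channels and discriminability against surrounding objects), and the names `select_rare_pixels` and `match_sparse` are hypothetical:

```python
import numpy as np

def select_rare_pixels(template: np.ndarray, n_pixels: int = 14):
    """Pick the n_pixels template positions whose intensity values
    occur least often in the template (low pattern frequency)."""
    hist = np.bincount(template.ravel(), minlength=256)
    freq = hist[template]          # per-pixel frequency of its own value
    flat = np.argsort(freq, axis=None)[:n_pixels]
    ys, xs = np.unravel_index(flat, template.shape)
    return list(zip(ys.tolist(), xs.tolist()))

def match_sparse(image: np.ndarray, template: np.ndarray, points):
    """Slide the template over the image, scoring each location by the
    sum of absolute differences over only the selected pixels.
    Evaluating ~14 pixels instead of the full template is what makes
    this kind of matching fast."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_score = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sum(abs(int(image[y + py, x + px]) - int(template[py, px]))
                        for py, px in points)
            if best_score is None or score < best_score:
                best_score, best = score, (y, x)
    return best, best_score
```

Because the score depends on so few pixels, choosing pixels that surrounding objects are unlikely to share (as the paper does) is what keeps the match both fast and discriminative.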