Kazuya OSE Kazunori IWATA Nobuo SUEMATSU
Consider selecting points on a contour in the x-y plane. In shape analysis, this is frequently referred to as contour sampling. It is important to select the points such that they effectively represent the shape of the contour. Generally, the stroke order and number of strokes are informative for that purpose. Several effective methods exist for sampling contours drawn with a certain stroke order and number of strokes, such as the English alphabet or Arabic numerals. However, many contours entail an uncertain stroke order and number of strokes, such as pictures of symbols, and little research has focused on methods for sampling such contours. This is because selecting the points in this case typically requires a large computational cost to check all the possible choices. In this paper, we present a sampling method that is useful regardless of whether or not the contours are drawn with a certain stroke order and number of strokes. Our sampling method thereby expands the application possibilities of contour processing. We formulate contour sampling as a discrete optimization problem that can be solved using a type of direct search. Based on a geometric graph whose vertices are the points and whose edges form rectangles, we construct an effective objective function for the problem. Using different shape datasets, we demonstrate that our sampling method is effective with respect to shape representation and retrieval.
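For illustration, the following is a minimal sketch of contour sampling by direct search, assuming only NumPy. It greedily shifts the selected indices along the contour to reduce a simple polygonal-approximation error; this objective is only a stand-in for the graph-based objective constructed in the paper, and all function names and parameters are hypothetical.

```python
import numpy as np

def polygon_error(contour, idx):
    """Sum of distances from contour points to the chords spanned by the selected points."""
    idx = np.sort(idx)
    n = len(contour)
    err = 0.0
    for a, b in zip(idx, np.roll(idx, -1)):
        # contour points lying between selected indices a and b (cyclically)
        span = np.arange(a, b if b > a else b + n) % n
        p, q = contour[a], contour[b]
        d = q - p
        length = np.linalg.norm(d) + 1e-12
        for i in span:
            v = contour[i] - p
            # perpendicular distance of contour[i] to the chord p-q
            err += abs(d[0] * v[1] - d[1] * v[0]) / length
    return err

def direct_search_sampling(contour, m, iters=20, seed=0):
    """Pick m contour indices by coordinate-wise direct search on the objective."""
    rng = np.random.default_rng(seed)
    n = len(contour)
    idx = np.sort(rng.choice(n, m, replace=False))
    best = polygon_error(contour, idx)
    for _ in range(iters):
        improved = False
        for k in range(m):
            for step in (-1, 1):
                cand = idx.copy()
                cand[k] = (cand[k] + step) % n
                if len(set(cand)) < m:
                    continue
                e = polygon_error(contour, cand)
                if e < best:
                    idx, best, improved = np.sort(cand), e, True
        if not improved:
            break
    return idx

# toy contour: an ellipse sampled densely, from which 12 points are selected
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[2 * np.cos(t), np.sin(t)]
print(direct_search_sampling(contour, m=12))
```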
Kazunori IWATA Hiroki YAMAMOTO Kazushi MIMURA
Shape matching with local descriptors is an underlying scheme in shape analysis. The matching results can be confirmed visually and also used to assess shape classification. Generally, shape matching is implemented by determining the correspondence between shapes that are represented by their respective sets of sampled points. Several matching methods have already been proposed; the main difference between them lies in their choice of matching cost function. This function measures the dissimilarity between the local distribution of sampled points around a focal point on one shape and the local distribution of sampled points around a reference point on the other shape. A local descriptor is used to describe the distribution of sampled points around a point of the shape. In this paper, we propose an extended scheme for shape matching that can compensate for errors in existing local descriptors. Our scheme is convenient to adopt because it does not require existing local descriptors to be modified. Its main idea is to consider the correspondences of the sampled points neighboring a focal point when determining the correspondence of the focal point itself. This is useful because it increases the chance of finding a suitable correspondence. However, considering the correspondences of neighboring points raises a problem of computational feasibility, because it substantially increases the number of possible correspondences that must be considered in shape matching. We solve this problem with a branch-and-bound algorithm that provides an efficient approximation. Using several shape datasets, we demonstrate that our scheme yields more suitable matchings than the conventional scheme, which does not consider the correspondences of neighboring sampled points, even though our scheme requires only a small increase in execution time.
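For context, the sketch below shows the conventional matching pipeline that such a scheme extends: log-polar (shape-context-style) local descriptors compared with a chi-squared cost and matched by minimum-cost assignment. It assumes NumPy and SciPy; the descriptor, binning, and function names are illustrative assumptions and not the descriptors or the neighbor-aware branch-and-bound extension used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar histogram of the other sampled points around each point."""
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    ang = np.arctan2(diff[..., 1], diff[..., 0])
    mean_d = dist[dist > 0].mean()                    # crude scale normalization
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    desc = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i
        r_bin = np.clip(np.searchsorted(r_edges, dist[i, mask]) - 1, 0, n_r - 1)
        t_bin = ((ang[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        np.add.at(desc[i], r_bin * n_theta + t_bin, 1.0)
        desc[i] /= desc[i].sum()
    return desc

def chi2_cost(d1, d2):
    """Chi-squared dissimilarity between every pair of descriptors."""
    num = (d1[:, None, :] - d2[None, :, :]) ** 2
    den = d1[:, None, :] + d2[None, :, :] + 1e-12
    return 0.5 * (num / den).sum(-1)

def match(points_a, points_b):
    """Minimum-cost point correspondence between two shapes."""
    cost = chi2_cost(shape_context(points_a), shape_context(points_b))
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()

# toy example: a circle matched against a rotated copy of itself
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
a = np.c_[np.cos(t), np.sin(t)]
b = np.c_[np.cos(t + 0.3), np.sin(t + 0.3)]
pairs, total = match(a, b)
print(total)
```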
Kazunori IWATA Kazushi IKEDA Hideaki SAKAI
We regard the events of a Markov decision process as the outputs of a Markov information source, so that the randomness of an empirical sequence can be analyzed through the codeword length of the sequence. Randomness is an important viewpoint in reinforcement learning, since learning amounts to eliminating randomness and finding an optimal policy. The occurrence of an optimal empirical sequence also depends on this randomness. We therefore introduce Lempel-Ziv coding to measure the randomness, which consists of two factors: the domain size and the stochastic complexity. In the experimental results, we confirm that learning and the occurrence of an optimal empirical sequence depend on the randomness, and we show that in the early stages the randomness is mainly characterized by the domain size, whereas as the number of time steps increases it depends increasingly on the complexity of the Markov decision process.
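A minimal sketch of this kind of randomness measure, assuming NumPy: an empirical state-action sequence is parsed with LZ78, and a rough codeword-length estimate is returned that grows with both the domain size and the complexity of the sequence. The symbol encoding and the length estimate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lz78_codeword_length(sequence, alphabet_size):
    """Rough LZ78 codeword length: each parsed phrase is encoded by a pointer
    to a previously seen phrase plus one new symbol."""
    dictionary = {(): 0}
    phrase = ()
    n_phrases = 0
    for s in sequence:
        cand = phrase + (s,)
        if cand in dictionary:
            phrase = cand
        else:
            dictionary[cand] = len(dictionary)
            n_phrases += 1
            phrase = ()
    if phrase:                            # unfinished phrase at the end
        n_phrases += 1
    # pointer bits plus symbol bits per phrase (an upper-bound style estimate)
    return n_phrases * (np.log2(max(n_phrases, 2)) + np.log2(alphabet_size))

# toy roll-out: encode (state, action) pairs of a random policy as single symbols
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
seq = [s * n_actions + a
       for s, a in zip(rng.integers(n_states, size=500),
                       rng.integers(n_actions, size=500))]
print(lz78_codeword_length(seq, n_states * n_actions))
```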
The nearest neighbor method is a simple and flexible scheme for the classification of data points in a vector space. It predicts the class label of an unseen data point using a majority rule over the labels of the known data points inside a neighborhood of the unseen point. Because it sometimes achieves good performance even for complicated problems, several derivatives of it have been studied. Among them, the discriminant adaptive nearest neighbor method is particularly worth revisiting to demonstrate its application. The main idea of this method is to adjust the neighbor metric of an unseen data point to the set of known data points before label prediction. It often improves the prediction, provided the neighbor metric is adjusted well. In statistical shape analysis, shape classification attracts attention because it is a vital topic. However, because a shape is generally expressed as a matrix, it is non-trivial to apply the discriminant adaptive nearest neighbor method to shape classification. Thus, in this study, we develop the discriminant adaptive nearest neighbor method to make it more useful in shape classification. To achieve this, a mixture model and an optimization algorithm for shape clustering are incorporated into the method. Furthermore, we describe several helpful techniques for making an initial guess of the model parameters in the optimization algorithm. Using several shape datasets, we demonstrate that our method is successful for shape classification.
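For reference, a minimal sketch of the underlying discriminant adaptive nearest neighbor rule on plain vector data, assuming NumPy: a local metric is built from the within- and between-class scatter in a neighborhood and then used for the nearest-neighbor vote. The shape-specific extension with a mixture model described above is not included, and the parameter choices are illustrative.

```python
import numpy as np

def dann_metric(x0, X, y, k=30, eps=1.0):
    """Local metric of the discriminant adaptive nearest neighbor (DANN) rule."""
    d = np.linalg.norm(X - x0, axis=1)
    nb = np.argsort(d)[:k]
    Xn, yn = X[nb], y[nb]
    mean = Xn.mean(0)
    dim = X.shape[1]
    W = np.zeros((dim, dim))                 # within-class scatter
    B = np.zeros_like(W)                     # between-class scatter
    for c in np.unique(yn):
        Xc = Xn[yn == c]
        mc = Xc.mean(0)
        W += (Xc - mc).T @ (Xc - mc)
        B += len(Xc) * np.outer(mc - mean, mc - mean)
    W /= len(Xn)
    B /= len(Xn)
    # whiten by W, spread along discriminant directions, add eps*I regularizer
    evals, evecs = np.linalg.eigh(W + 1e-8 * np.eye(dim))
    W_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Bstar = W_inv_sqrt @ B @ W_inv_sqrt
    return W_inv_sqrt @ (Bstar + eps * np.eye(dim)) @ W_inv_sqrt

def dann_predict(x0, X, y, k_metric=30, k_vote=5):
    """Classify x0 by majority vote among nearest neighbors under the DANN metric."""
    S = dann_metric(x0, X, y, k=k_metric)
    diff = X - x0
    d2 = np.einsum('ij,jk,ik->i', diff, S, diff)
    votes = y[np.argsort(d2)[:k_vote]]
    return np.bincount(votes).argmax()

# toy data: two Gaussian classes in the plane
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
print(dann_predict(np.array([1.0, 1.0]), X, y))
```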
Katsutoshi UEAOKI Kazunori IWATA Nobuo SUEMATSU Akira HAYASHI
A two-dimensional shape is generally represented by line drawings or object contours in a digital image. Shapes can be divided into two types, namely ordered and unordered shapes. An ordered shape is an ordered set of points, whereas an unordered shape is an unordered set. As a result, each type typically uses different attributes to define the local descriptors involved in representing the local distributions of points sampled from the shape. Throughout this paper, we focus on unordered shapes. Since most local descriptors of unordered shapes are not scale-invariant, the shapes in an image dataset are usually made the same size through scale normalization before shape matching procedures are applied. Shapes obtained through scale normalization are suitable for such descriptors if the original whole shapes are similar. However, they are not suitable if parts of each original shape are drawn at different scales. Thus, in this paper, we present a scale-invariant descriptor constructed from von Mises distributions to deal with such shapes. Since this descriptor has the merits of being both scale-invariant and a probability distribution, it does not require scale normalization and can employ an arbitrary dissimilarity measure between probability distributions when matching shape points. In experiments on shape matching and retrieval, we show the effectiveness of our descriptor compared with several conventional descriptors.
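A minimal sketch of a scale-invariant, von Mises-based descriptor in this spirit, assuming NumPy and SciPy: the directions from a point to the other sampled points are summarized by a single fitted von Mises density (the descriptor in the paper is likely richer), and descriptors are compared with a plain L2 distance. The concentration approximation and all names are illustrative assumptions.

```python
import numpy as np
from scipy.special import i0

def von_mises_descriptor(points, i, grid):
    """Fit a von Mises density to the directions from point i to the other points
    and evaluate it on an angular grid; directions ignore scale entirely."""
    diff = np.delete(points, i, axis=0) - points[i]
    theta = np.arctan2(diff[:, 1], diff[:, 0])
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)                               # mean direction
    R = np.hypot(C, S)                                  # mean resultant length
    kappa = R * (2 - R ** 2) / (1 - R ** 2 + 1e-12)     # rough concentration estimate
    return np.exp(kappa * np.cos(grid - mu)) / (2 * np.pi * i0(kappa))

def descriptor_cost(points_a, points_b, n_grid=64):
    """Pairwise L2 cost between the von Mises descriptors of two point sets."""
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    Da = np.array([von_mises_descriptor(points_a, i, grid) for i in range(len(points_a))])
    Db = np.array([von_mises_descriptor(points_b, j, grid) for j in range(len(points_b))])
    return np.linalg.norm(Da[:, None, :] - Db[None, :, :], axis=-1)

# a shape and a rescaled copy: corresponding points get (numerically) identical descriptors
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
shape = np.c_[np.cos(t), 0.5 * np.sin(t)]
cost = descriptor_cost(shape, 2.0 * shape)
print(np.allclose(np.diag(cost), 0, atol=1e-8))
```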
Kosei KURISU Nobuo SUEMATSU Kazunori IWATA Akira HAYASHI
Finite mixture modeling has been widely used in image segmentation. In its simplest form, however, the spatial correlation among neighboring pixels is not taken into account, and the segmentation results can be severely degraded by image noise. We propose a spatially correlated mixture model in which the mixing proportions of the finite mixture model are governed by a set of underlying functions defined on the image space. The spatial correlation among pixels is introduced by placing a Gaussian process prior on these underlying functions. The spatial correlation can thus be set rather directly and flexibly by choosing the covariance function of the Gaussian process prior. The effectiveness of our model is demonstrated in experiments with synthetic and real images.
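A minimal sketch of spatially correlated mixture segmentation, assuming NumPy and SciPy: instead of a Gaussian process prior on the underlying mixing-proportion functions, the sketch simply smooths the pixel-wise responsibilities with a Gaussian filter at each EM iteration, which crudely mimics the spatial correlation. It illustrates the idea only and is not the model proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_gmm_segment(image, K=2, sigma=3.0, n_iter=30, seed=0):
    """EM for a Gaussian mixture with pixel-wise mixing proportions that are
    spatially smoothed each iteration (a crude surrogate for a GP prior)."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    x = image.ravel()
    mu = rng.choice(x, K)                  # component means
    var = np.full(K, x.var() + 1e-6)       # component variances
    pi = np.full((H * W, K), 1.0 / K)      # pixel-wise mixing proportions
    for _ in range(n_iter):
        # E-step: responsibilities under the pixel-wise mixing proportions
        logp = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        r = pi * np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: update the Gaussian components
        nk = r.sum(0) + 1e-12
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
        # spatial step: smooth the responsibilities over the image plane
        pi = np.stack([gaussian_filter(r[:, k].reshape(H, W), sigma).ravel()
                       for k in range(K)], axis=1)
        pi /= pi.sum(1, keepdims=True)
    return r.argmax(1).reshape(H, W)

# toy image: two intensity regions corrupted by noise
img = np.zeros((64, 64))
img[:, 32:] = 1.0
img += np.random.default_rng(1).normal(0, 0.4, img.shape)
labels = spatial_gmm_segment(img, K=2)
print(labels.shape, np.unique(labels))
```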