Author Search Result

[Author] Hisashi KURASAWA (3 hits)

Results 1-3 of 3
  • Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval

    Hisashi KURASAWA  Atsuhiro TAKASU  Jun ADACHI  

     
    PAPER-Contents Technology and Web Information Systems

    Vol: E92-D No:10  Page(s): 2064-2072

    Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in peer-to-peer information retrieval (P2P IR), such an index is costly to construct and tends to be managed unfairly because of the skewed term frequency distribution. We devised a new distributed index for P2P IR, named Huffman-DHT. The new index uses an algorithm similar to Huffman coding, modifying the DHT structure according to the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large portion of the node ID space of the DHT. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentration. Huffman-DHT is the first approach to apply concepts from coding theory and the term frequency distribution to load balancing. We evaluated the approach in experiments using a document collection and assessed its load balancing capability in P2P IR. The experimental results indicate that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we show that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrences from a small number of sample documents.
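
    As an illustration of the idea only (not the authors' implementation), the Python sketch below builds Huffman-style binary prefixes from term frequencies and maps each prefix to an interval of a hypothetical 16-bit DHT ID space: frequent terms receive shorter prefixes and therefore larger ID ranges, which is the mechanism described above for spreading registration load.

    # Hedged sketch: Huffman-style ID prefixes for terms (illustrative only).
    import heapq

    def huffman_prefixes(term_freqs):
        """Return a dict term -> binary prefix; frequent terms get shorter prefixes."""
        # Each heap entry: (accumulated frequency, tie-breaker, {term: code-so-far}).
        heap = [(f, i, {t: ""}) for i, (t, f) in enumerate(term_freqs.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {t: "0" + c for t, c in c1.items()}
            merged.update({t: "1" + c for t, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return heap[0][2]

    def id_range(prefix, id_bits=16):
        """ID-space interval covered by a prefix; a shorter prefix covers a larger interval."""
        lo = int(prefix.ljust(id_bits, "0"), 2)
        hi = int(prefix.ljust(id_bits, "1"), 2)
        return lo, hi

    if __name__ == "__main__":
        freqs = {"the": 5000, "retrieval": 300, "huffman": 20, "dht": 15}
        for term, code in sorted(huffman_prefixes(freqs).items(), key=lambda kv: len(kv[1])):
            lo, hi = id_range(code)
            print(f"{term:10s} prefix={code:5s} covers {hi - lo + 1:6d} of {2**16} IDs")

    Running the sketch shows the most frequent term receiving a one-bit prefix that covers half of the 16-bit ID space, so its many registrations can be spread across many peers instead of concentrating on a single node.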

  • Optimal Pivot Selection Method Based on the Partition and the Pruning Effect for Metric Space Indexes

    Hisashi KURASAWA  Daiji FUKAGAWA  Atsuhiro TAKASU  Jun ADACHI  

     
    PAPER

    Vol: E94-D No:3  Page(s): 504-514

    This paper proposes a new method for reducing the cost of nearest neighbor searches in metric spaces. Many similarity search indexes recursively divide a region into subregions by using pivots and construct a tree-structured index. Most recently developed indexes focus on pruning objects and pay little attention to tree balancing; as a result, an index with an imbalanced tree structure may be constructed and the search cost degrades. We propose a similarity search index called the Partitioning Capacity (PC) Tree. It selects the optimal pivot in terms of the PC, which quantifies both the balance of the regions partitioned by a pivot and the estimated effectiveness of the search pruning by that pivot. As a result, the PCTree reduces the search cost for various data distributions. We experimentally compared the PCTree with four indexes using synthetic data and five real datasets. The experimental results show that the PCTree successfully reduces the search cost.
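
    The abstract does not give the formula for the Partitioning Capacity, so the following Python sketch is only a rough stand-in under stated assumptions: each candidate pivot is scored by the binary entropy of the inside/outside split around its median radius (a balance term) multiplied by the fraction of objects lying farther than a query radius r from the split boundary (a crude pruning estimate), and the highest-scoring candidate is kept.

    # Hedged sketch: pivot selection by a balance-times-pruning score
    # (a stand-in for the Partitioning Capacity, not the paper's definition).
    import math
    import random

    def pivot_score(pivot, data, dist, r):
        ds = sorted(dist(pivot, x) for x in data if x is not pivot)
        split = ds[len(ds) // 2]                        # median ball radius
        inside = sum(d <= split for d in ds) / len(ds)
        # Balance term: binary entropy of the inside/outside fractions.
        balance = 0.0
        for p in (inside, 1.0 - inside):
            if p > 0:
                balance -= p * math.log2(p)
        # Pruning term: fraction of objects farther than r from the boundary,
        # i.e. objects a radius-r query is unlikely to need on the other side.
        prunable = sum(abs(d - split) > r for d in ds) / len(ds)
        return balance * prunable

    def select_pivot(data, dist, r, n_candidates=20):
        candidates = random.sample(data, min(n_candidates, len(data)))
        return max(candidates, key=lambda p: pivot_score(p, data, dist, r))

    if __name__ == "__main__":
        random.seed(0)
        points = [(random.random(), random.random()) for _ in range(500)]
        best = select_pivot(points, dist=math.dist, r=0.05)
        print("selected pivot:", best)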

  • Margin-Based Pivot Selection for Similarity Search Indexes

    Hisashi KURASAWA  Daiji FUKAGAWA  Atsuhiro TAKASU  Jun ADACHI  

     
    PAPER-Multimedia Databases

    Vol: E93-D No:6  Page(s): 1422-1432

    When developing an index for similarity search in metric spaces, how to divide the space for effective search pruning is a fundamental issue. We present Maximal Metric Margin Partitioning (MMMP), a partitioning scheme for similarity search indexes. MMMP divides the data according to its distribution pattern, with particular attention to the boundaries of clusters. A partitioning boundary created by MMMP is likely to lie in a sparse area between clusters and is placed at the maximum distance from the edges of the two clusters. We also present an indexing scheme, named the MMMP-Index, which uses MMMP together with pivot filtering. The MMMP-Index can prune many objects that are not relevant to a query, thereby reducing the query execution cost. Our experimental results show that MMMP effectively indexes clustered data and reduces the search cost. For clustered data in a vector space, the MMMP-Index reduces the computational cost to less than two thirds of that of comparable schemes.
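
    As a hedged illustration of the margin idea (not the authors' MMMP code), the sketch below chooses a ball-partition radius for a given pivot by searching for the widest gap in the sorted pivot-to-object distances, so that the boundary falls in a sparse region midway between the nearest objects on either side.

    # Hedged sketch: place the partition radius in the widest gap of the
    # pivot-to-object distances (illustrative margin search only).
    def max_margin_radius(pivot, data, dist):
        ds = sorted(dist(pivot, x) for x in data if x is not pivot)
        best_gap, best_radius = -1.0, None
        for d_near, d_far in zip(ds, ds[1:]):
            gap = d_far - d_near
            if gap > best_gap:
                best_gap = gap
                best_radius = (d_near + d_far) / 2.0   # midway between the two edges
        return best_radius, best_gap

    if __name__ == "__main__":
        # Two 1-D "clusters" around 0.1 and 0.9; the boundary should land near 0.5.
        points = [0.05, 0.08, 0.10, 0.12, 0.85, 0.90, 0.92, 0.95]
        radius, margin = max_margin_radius(0.0, points, lambda a, b: abs(a - b))
        print(f"split radius = {radius:.3f}, margin = {margin:.3f}")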
