Author Search Result

[Author] Hiroyuki OHSAKI (20 hits)

1-20 of 20 hits
  • Contact Duration-Aware Epidemic Broadcasting in Delay/Disruption-Tolerant Networks Open Access

    Kohei WATABE  Hiroyuki OHSAKI  

     
    PAPER-Network

      Vol:
    E98-B No:12
      Page(s):
    2389-2399

    DTNs (Delay/Disruption-Tolerant Networks) composed of mobile nodes in low node-density environments have attracted considerable attention in recent years. In this paper, we propose a CD-BCAST (Contact Duration BroadCAST) mechanism that can reduce the number of message forwardings while maintaining short message delivery delays in DTNs composed of mobile nodes. The key idea behind CD-BCAST is to increase the probability of simultaneous forwarding by intentionally delaying message forwarding based on the contact duration distribution measured by each node. Through simulations, we show that CD-BCAST requires substantially fewer message forwardings than conventional mechanisms and that it does not require parameter tuning under a variety of communication ranges and node densities.
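
    A minimal sketch of the delayed-broadcast idea described above. The concrete delay rule (here, waiting for a low percentile of the measured contact durations) is an illustrative placeholder; the rule actually used by CD-BCAST is not given in the abstract.

    class CDBcastNode:
        """Node that delays forwarding based on measured contact durations."""
        def __init__(self):
            self.contact_durations = []  # contact durations [s] measured so far
            self.seen = set()            # message IDs already handled

        def record_contact(self, duration):
            self.contact_durations.append(duration)

        def forwarding_delay(self, percentile=0.2):
            # Wait up to a low percentile of the measured contact durations, so
            # the current contacts are still likely to be in range when we
            # broadcast, increasing the chance of a simultaneous forwarding.
            if not self.contact_durations:
                return 0.0
            d = sorted(self.contact_durations)
            return d[int(percentile * (len(d) - 1))]

        def on_receive(self, msg_id):
            # Returns the intentional delay before the single broadcast,
            # or None if the message has already been forwarded.
            if msg_id in self.seen:
                return None
            self.seen.add(msg_id)
            return self.forwarding_delay()

    node = CDBcastNode()
    for d in (5.0, 12.0, 8.0, 30.0, 7.0):
        node.record_contact(d)
    print(node.on_receive("msg-1"))  # 5.0 -> wait 5 s before broadcasting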

  • Analysis of Blacklist Update Frequency for Countering Malware Attacks on Websites

    Takeshi YAGI  Junichi MURAYAMA  Takeo HARIU  Sho TSUGAWA  Hiroyuki OHSAKI  Masayuki MURATA  

     
    PAPER-Internet

      Vol:
    E97-B No:1
      Page(s):
    76-86

    We propose a method for determining the frequency for monitoring the activities of a malware download site used for malware attacks on websites. In recent years, there has been an increase in attacks exploiting vulnerabilities in web applications for infecting websites with malware and maliciously using those websites as attack platforms. One scheme for countering such attacks is to blacklist malware download sites and filter out access to them from user websites. However, a malware download site is often constructed through the use of an ordinary website that has been maliciously manipulated by an attacker. Once the malware has been deleted from the malware download site, this scheme must be able to unblacklist that site to prevent normal user websites from being falsely detected as malware download sites. However, if a malware download site is frequently monitored for the presence of malware, the attacker may sense this monitoring and relocate the malware to a different site. This means that an attack will not be detected until the newly generated malware download site is discovered. In response to these problems, we clarify the change in attack-detection accuracy caused by attacker behavior. This is done by modeling attacker behavior, specifying a state-transition model with respect to the blacklisting of a malware download site, and analyzing these models with synthetically generated attack patterns and attack patterns measured in an operational network. From this analysis, we derive the optimal monitoring frequency that maximizes the true detection rate while minimizing the false detection rate.

  • Performance Evaluation and Parameter Tuning of TCP over ABR Service in ATM Networks

    Go HASEGAWA  Hiroyuki OHSAKI  Masayuki MURATA  Hideo MIYAHARA  

     
    PAPER

      Vol:
    E79-B No:5
      Page(s):
    668-683

    Rate-based congestion control is a promising scheme for the data transfer service in ATM networks, and has been standardized in the ATM Forum. To migrate the existing upper-layer protocols to ATM networks, however, further investigation is necessary. In particular, when the ABR service class is applied to TCP (Transmission Control Protocol), the duality of congestion control schemes in different protocol layers, i.e., conventional window-based congestion control in the transport layer and rate-based congestion control in the ATM layer, may have an unexpected influence on performance. As an alternative approach for supporting the TCP protocol, EPD (Early Packet Discard), which adds a selective cell-discarding function to the UBR (Unspecified Bit Rate) service, has recently been proposed. It does not have a "duality problem" since EPD only discards cells selectively to improve packet-level performance. In this paper, we examine the performance of the TCP protocol over ATM networks by using a simulation technique. We first compare the rate-based control of the ABR service and EPD applied to the UBR service, and show that rate-based control achieves better fairness and higher throughput in most circumstances. However, rate-based control requires careful tuning of control parameters to obtain its effectiveness, and the duality problem leads to unexpected degradation of TCP-level performance. With rate-based congestion control, temporal congestion at the switch is quickly relieved because the source terminals reduce their rates. However, our simulation reveals that if the parameter set of the rate-based congestion control is not chosen appropriately, the congestion is also recognized at TCP due to packet drops, and TCP unnecessarily throttles its window size. To avoid this sort of problem, we develop an appropriate parameter set suitable for TCP over the ABR service, and point out that some modification of TCP may be necessary for further performance improvement.

  • Analysis of a Window-Based Flow Control Mechanism Based on TCP Vegas in Heterogeneous Network Environment

    Keiichi TAKAGAKI  Hiroyuki OHSAKI  Masayuki MURATA  

     
    PAPER

      Vol:
    E85-B No:1
      Page(s):
    89-97

    A feedback-based congestion control mechanism is essential to realize an efficient data transfer service in packet-switched networks. TCP (Transmission Control Protocol) is a feedback-based congestion control mechanism, and has been widely used in the current Internet. An improved version of TCP called TCP Vegas has been proposed and studied in the literature; it can achieve better performance than TCP Reno. In previous studies, performance analysis of a window-based flow control mechanism based on TCP Vegas has been performed only for a simple network topology. In this paper, we extend the analysis to a generic network topology where each connection is allowed to have a different propagation delay and to traverse multiple bottleneck links. We first derive the equilibrium values of the window sizes of TCP connections and the number of packets waiting in a router's buffer. We also derive the throughput of each TCP connection in steady state, and investigate the effect of the control parameters of TCP Vegas on fairness among TCP connections. We then present several numerical examples, showing how the control parameters of TCP Vegas should be configured for achieving both stability and better transient performance.
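
    For reference, the window-update rule of TCP Vegas that such a flow control mechanism is based on is shown below; the notation (window w, base RTT τ, measured RTT r, thresholds α and β) is ours, not the paper's.

    \[
      D = w\left(\frac{1}{\tau} - \frac{1}{r}\right)\tau = w\,\frac{r-\tau}{r},
      \qquad
      w \leftarrow
      \begin{cases}
        w + 1 & \text{if } D < \alpha,\\
        w - 1 & \text{if } D > \beta,\\
        w     & \text{otherwise,}
      \end{cases}
    \]

    so that in equilibrium each connection keeps between α and β of its own packets queued in the routers along its path.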

  • Design Algorithm for Virtual Path Based ATM Networks

    Byung Han RYU  Hiroyuki OHSAKI  Masayuki MURATA  Hideo MIYAHARA  

     
    PAPER-Communication Networks and Services

      Vol:
    E79-B No:2
      Page(s):
    97-107

    An ATM network design algorithm is treated as a resource allocation problem. The virtual path (VP) concept has been proposed as an effective way to let traffic with diverse characteristics and different quality of service (QOS) requirements coexist in ATM networks. Designing a VP-based ATM network requires considering the network topology and the traffic pattern generated by users so as to minimize the network construction cost while satisfying QOS requirements such as cell/call loss probabilities and cell delay times. In this paper, we propose a new heuristic design algorithm for the VP-based ATM network under QOS constraints. The minimum bandwidth required to transfer a given amount of traffic is first obtained by utilizing an equivalent bandwidth method. After all the VP routes are temporarily established along the shortest paths, we try to minimize the network cost through the alteration of VP routes, the separation of a single VP into several VPs, and the introduction of VCX nodes. To evaluate our design algorithm, we consider two kinds of traffic: voice traffic as a low-speed service and still picture traffic as a high-speed service. Through numerical examples, we demonstrate that our design method achieves an efficient use of network resources, which results in a cost-effective VP-based ATM network.

  • On Scaling Property of Information-Centric Networking

    Ryo NAKAMURA  Hiroyuki OHSAKI  

     
    PAPER

      Publicized:
    2019/03/22
      Vol:
    E102-B No:9
      Page(s):
    1804-1812

    In this paper, we focus on a large-scale ICN (Information-Centric Networking) and reveal its scaling property. Because of in-network content caching, ICN is a type of cache network and is expected to be a promising architecture for the future Internet. To realize a global-scale (e.g., Internet-scale) ICN, it is crucial to understand the fundamental properties of such large-scale cache networks. However, the scaling property of ICN has not been well understood due to the lack of theoretical foundations and analysis methodologies. To answer research questions regarding the scaling property of ICN, we derive the cache hit probability at each router, the average content delivery delay of each entity, and the average content delivery delay of all entities over a content distribution tree comprised of a single repository (i.e., content provider), multiple routers, and multiple entities (i.e., content consumers). Through several numerical examples, we investigate the effect of the topology and size of the content distribution tree and of the cache size at routers on the average content delivery delay of all entities. Our findings include that the average content delivery delay of ICNs converges to a constant value if the cache size of routers is not small, which implies high scalability of ICNs, and that even if the network size grows indefinitely, the average content delivery delay is upper-bounded by a constant value as long as the routers in the network are provided with a fair amount of content caches.
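
    A toy calculation of why the delivery delay saturates as the tree grows, assuming (for illustration only) that each on-path router hits its cache independently with probability p; the paper derives the per-router hit probabilities instead of assuming them.

    def expected_hops(p, depth):
        """Expected hop distance from a consumer `depth` hops away from the
        repository until the requested content is found (independent hits)."""
        hops = sum(i * p * (1 - p) ** (i - 1) for i in range(1, depth))
        hops += depth * (1 - p) ** (depth - 1)  # missed everywhere -> repository
        return hops

    for L in (2, 4, 8, 16, 32):
        print(L, round(expected_hops(0.3, L), 3))
    # the values approach 1/p = 3.33... hops, i.e., a constant upper bound on the delay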

  • GridFTP-APT: Automatic Parallelism Tuning Mechanism for GridFTP in Long-Fat Networks

    Takeshi ITO  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER-Network

      Vol:
    E91-B No:12
      Page(s):
    3925-3936

    In this paper, we propose an extension to GridFTP that optimizes its performance by dynamically adjusting the number of parallel TCP connections. GridFTP has been used as a data transfer protocol to effectively transfer a large volume of data in Grid computing. GridFTP supports a feature called parallel data transfer that improves throughput by establishing multiple TCP connections in parallel. However, for achieving high GridFTP throughput, the number of TCP connections should be optimized based on the network status. In this paper, we propose an automatic parallelism tuning mechanism called GridFTP-APT (GridFTP with Automatic Parallelism Tuning) that adjusts the number of parallel TCP connections according to information available to the Grid middleware. Through simulations, we demonstrate that GridFTP-APT significantly improves the performance of GridFTP in various network environments.
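
    A generic illustration of automatic parallelism tuning: adjust the number of parallel TCP connections in whichever direction improves the measured goodput. The hill-climbing rule below is our own placeholder; the actual GridFTP-APT controller adjusts the parallelism using information available to the Grid middleware, as described above.

    def tune_parallelism(measure_goodput, n=1, n_max=64, rounds=20):
        """measure_goodput(n) -> goodput achieved with n parallel connections."""
        best = measure_goodput(n)
        step = 1
        for _ in range(rounds):
            cand = min(n_max, max(1, n + step))
            got = measure_goodput(cand)
            if got > best:              # keep moving in the same direction
                n, best = cand, got
                step *= 2               # accelerate while it helps
            else:                       # no gain: reverse the search direction
                step = -1 if step > 0 else 1
        return n

    # toy goodput model that saturates at 8 connections and degrades beyond it
    print(tune_parallelism(lambda n: min(n, 8) * 10 - max(0, n - 8) * 3))  # -> 8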

  • Graph Degree Heterogeneity Facilitates Random Walker Meetings

    Yusuke SAKUMOTO  Hiroyuki OHSAKI  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2020/12/14
      Vol:
    E104-B No:6
      Page(s):
    604-615

    Various graph algorithms have been developed with multiple random walks, the movement of several independent random walkers on a graph. Designing an efficient graph algorithm based on multiple random walks requires investigating multiple random walks theoretically to attain a deep understanding of their characteristics. The first meeting time is one of the important metrics for multiple random walks. The first meeting time on a graph is defined by the time it takes for multiple random walkers to meet at the same node in a graph. This time is closely related to the rendezvous problem, a fundamental problem in computer science. The first meeting time of multiple random walks has been analyzed previously, but many of these analyses focused on regular graphs. In this paper, we analyze the first meeting time of multiple random walks in arbitrary graphs and clarify the effects of graph structures on its expected value. First, we derive the spectral formula of the expected first meeting time on the basis of spectral graph theory. Then, we examine the principal component of the expected first meeting time using the derived spectral formula. The clarified principal component reveals that (a) the expected first meeting time is almost dominated by $n/(1+d_{\mathrm{std}}^2/d_{\mathrm{avg}}^2)$ and (b) the expected first meeting time is independent of the starting nodes of the random walkers, where $n$ is the number of nodes of the graph, and $d_{\mathrm{avg}}$ and $d_{\mathrm{std}}$ are the average and the standard deviation of the weighted node degrees, respectively. Characteristic (a) is useful for understanding the effect of the graph structure on the first meeting time. According to the revealed effect of graph structures, a large coefficient of variation $d_{\mathrm{std}}/d_{\mathrm{avg}}$ of the weighted degrees (degree heterogeneity) facilitates the meeting of random walkers.
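
    The dominant term identified above can be computed directly from a (weighted) degree sequence; a short sketch:

    import statistics

    def meeting_time_dominant_term(degrees):
        """n / (1 + (d_std / d_avg)^2) for the given degree sequence."""
        n = len(degrees)
        d_avg = statistics.mean(degrees)
        d_std = statistics.pstdev(degrees)  # population standard deviation
        return n / (1 + (d_std / d_avg) ** 2)

    # a regular graph (no degree heterogeneity) versus a heterogeneous one
    print(meeting_time_dominant_term([4] * 100))             # 100.0 (slower meetings)
    print(meeting_time_dominant_term([1] * 90 + [50] * 10))  # about 13.9 (faster)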

  • A Fast Packet Loss Detection Mechanism for Content-Centric Networking

    Ryo NAKAMURA  Hiroyuki OHSAKI  

     
    PAPER

      Publicized:
    2019/03/22
      Vol:
    E102-B No:9
      Page(s):
    1842-1852

    In this paper, we propose a packet loss detection mechanism called Interest ACKnowledgement (ACK). An Interest ACK provides information on the history of successful Interest packet receptions at a repository (i.e., content provider); this information is conveyed to the corresponding entity (i.e., content consumer) via the header of Data packets. Interest ACKs enable the entity to quickly and accurately detect Interest and Data packet losses in the network. We conduct simulations to investigate the effectiveness of Interest ACKs under several scenarios. Our results show that Interest ACKs are effective for improving the adaptability and stability of CCN with window-based flow control and that packet losses at the repository can be reduced by 10%-20%. Moreover, by extending the Interest ACK, we propose a lossy link detection mechanism called LLD-IA (Lossy Link Detection with Interest ACKs), with which an entity estimates the link where a packet was discarded in the network. Through simulations, we also show that LLD-IA can effectively detect links where packets were discarded under moderate packet loss ratios.
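
    A minimal sketch of how an entity might use Interest ACKs to detect Interest losses, assuming the repository piggybacks the sequence numbers of recently received Interests on Data packets; the concrete header encoding is not given in the abstract.

    class InterestAckConsumer:
        def __init__(self):
            self.outstanding = set()  # seqnos of Interests sent but not yet satisfied

        def send_interest(self, seqno):
            self.outstanding.add(seqno)

        def on_data(self, seqno, acked_interests):
            """acked_interests: Interest seqnos the repository reports having received."""
            self.outstanding.discard(seqno)
            # Interests no newer than the latest acked one but absent from the
            # reported history were lost on the upstream (Interest) path.
            newest = max(acked_interests, default=-1)
            return sorted(s for s in self.outstanding
                          if s <= newest and s not in acked_interests)

    c = InterestAckConsumer()
    for s in range(5):
        c.send_interest(s)
    print(c.on_data(4, acked_interests={0, 1, 2, 4}))  # [3] -> Interest 3 was lost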

  • Improving Robustness of XCP (eXplicit Control Protocol) for Dynamic Traffic

    Yusuke SAKUMOTO  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER-Network

      Vol:
    E93-B No:11
      Page(s):
    3013-3022

    In this paper, we reveal inherent robustness issues of XCP (eXplicit Control Protocol), and propose extensions to XCP for increasing its robustness. XCP has been proposed as an efficient transport-layer protocol for wide-area, high-speed networks; it performs congestion control based on explicit feedback from routers. Many performance studies of XCP have been carried out in the literature. However, the effect of traffic dynamics on XCP performance has not been fully investigated. In this paper, through simulation experiments, we first show that XCP has the following problems: (1) the bottleneck link utilization drops when XCP traffic is dynamic, and (2) the operation of XCP becomes unstable in a network carrying both XCP and non-XCP traffic. We then propose XCP-IR (XCP with Increased Robustness), which operates efficiently even for dynamic XCP and non-XCP traffic.

  • Performance Improvement of TCP over EFCI-Based ABR Service Class by Tuning of Congestion Control Parameters

    Go HASEGAWA  Hiroyuki OHSAKI  Masayuki MURATA  Hideo MIYAHARA  

     
    PAPER-Communication protocol

      Vol:
    E80-B No:10
      Page(s):
    1444-1453

    We investigate the performance of the TCP protocol over ATM networks by using a simulation technique. At the ATM layer, we consider (1) rate-based control of the ABR service class, (2) an EPD (Early Packet Discard) technique applied to the UBR service class, and (3) EPD with per-VC accounting for fairness enhancement applied to the UBR service class. For comparison, we adopt a multi-hop network model in which multiple ATM switches are interconnected. In such a network, unfairness among connections is a potential problem due to differences in the number of hops and/or the round-trip times among connections. Simulation results show that the rate-based control of ABR achieves the highest throughput and the best fairness in most circumstances. However, the performance of TCP over ABR is degraded once cell loss takes place due to inappropriate control parameter settings. To avoid this performance degradation, we investigate an appropriate parameter set suitable for TCP over the ABR service. As a result, parameter tuning can improve the performance of TCP over ABR, but only to a limited extent. We therefore consider TCP over ABR with an EPD enhancement, in which the EPD technique is incorporated into ABR. Finally, we consider a multimedia network environment, where VBR traffic exists in the network in addition to the ABR/UBR traffic, and investigate the applicability of the above observations to a more generic model. Through simulation experiments, we find that similar results can be obtained, but it is also shown that the parameters of the rate-based congestion control must be chosen carefully by taking into account the existence of VBR traffic. For this, we discuss a method to determine the appropriate control parameters.

  • Lightweight and Distributed Connectivity-Based Clustering Derived from Schelling's Model

    Sho TSUGAWA  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER

      Vol:
    E95-B No:8
      Page(s):
    2549-2557

    In the literature, two connectivity-based distributed clustering schemes exist: CDC (Connectivity-based Distributed node Clustering scheme) and SDC (SCM-based Distributed Clustering). While CDC and SDC have mechanisms for maintaining clusters against nodes joining and leaving, neither method assumes that frequent changes occur in the network topology. In this paper, we propose a lightweight distributed clustering method that we term SBDC (Schelling-Based Distributed Clustering), since this scheme is derived from Schelling's model, a popular segregation model in sociology. We evaluate the effectiveness of the proposed SBDC in an environment where frequent changes arise in the network topology. Our simulation results show that SBDC outperforms CDC and SDC under frequent changes in network topology caused by high node mobility.
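
    A sketch of the Schelling-style local decision that inspires such a scheme: an "unsatisfied" node (too few neighbors share its cluster label) adopts the label of its local majority. This is only an illustration of the underlying idea; the actual SBDC rules are not given in the abstract.

    def schelling_step(labels, neighbors, tolerance=0.5):
        """One synchronous round of a Schelling-style relabeling rule.
        labels: {node: cluster_label}, neighbors: {node: [adjacent nodes]}."""
        new_labels = dict(labels)
        for v, nbrs in neighbors.items():
            if not nbrs:
                continue
            same = sum(1 for u in nbrs if labels[u] == labels[v])
            if same / len(nbrs) < tolerance:           # "unsatisfied" node
                counts = {}
                for u in nbrs:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
                new_labels[v] = max(counts, key=counts.get)  # join the local majority
        return new_labels

    nbrs = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
    labels = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B", 6: "A"}
    print(schelling_step(labels, nbrs))
    # {1: 'A', 2: 'A', 3: 'A', 4: 'B', 5: 'B', 6: 'B'} -> two connectivity-based clusters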

  • Evaluations and Analysis of Malware Prevention Methods on Websites

    Takeshi YAGI  Junichi MURAYAMA  Takeo HARIU  Hiroyuki OHSAKI  

     
    PAPER-Internet

      Vol:
    E96-B No:12
      Page(s):
    3091-3100

    With the diffusion of web services caused by the appearance of a new architecture known as cloud computing, a large number of websites have been used by attackers as hopping sites to attack other websites and user terminals, because many vulnerable websites are constructed and managed by unskilled users. To construct hopping sites, many attackers force victims to download malware by using vulnerabilities in web applications. To protect websites from these malware infection attacks, conventional methods, such as anti-virus software, filter files from attackers using pattern files generated by analyzing conventional malware files collected by security vendors. In addition, certain anti-virus software uses a behavior blocking approach, which monitors malicious file activities and modifications. These methods can detect malware files that are already known. However, it is difficult to detect malware that differs from known malware. It is also difficult to define malware, since legitimate software files can become malicious depending on the situation. We previously proposed an access filtering method based on the communication opponents of attacks, i.e., the other servers or terminals that connect with our web honeypots, which collect malware infection attacks on websites by using actual vulnerable web applications. In this blacklist-based method, the URLs or IP addresses used in malware infection attacks collected by the web honeypots are listed in a blacklist, and accesses to and from websites are filtered based on the blacklist. To reveal the effects in an actual attack situation on the Internet, we evaluated the detection ratio of anti-virus software, of our method, and of a composite of both methods. Our evaluation revealed that anti-virus software detected approximately 50% of malware files, our method detected approximately 98% of attacks, and the composite of the two methods could detect approximately 99% of attacks.

  • Delay Performance Analysis on Ad-Hoc Delay Tolerant Broadcast Network Applied to Vehicle-to-Vehicle Communication

    Satoshi HASEGAWA  Yusuke SAKUMOTO  Mirai WAKABAYASHI  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER

      Vol:
    E92-B No:3
      Page(s):
    728-736

    Research on Delay Tolerant Networks (DTN) has been active, aiming at a variety of potential applications toward the ubiquitous society. The vehicular network is one of the promising areas for applying DTN. In this paper, the end-to-end delay characteristics of a vehicular ad-hoc broadcast DTN are analyzed, where a dynamic vehicle mobility model is introduced. The analysis is applied to two realistic road models, namely one-way and two-way traffic models. A simulation study demonstrates the validity of our analysis for use in generic vehicular networks.

  • Stability Analysis of XCP (eXplicit Control Protocol) with Heterogeneous Flows

    Yusuke SAKUMOTO  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER-Internet

      Vol:
    E92-B No:10
      Page(s):
    3174-3182

    In this paper, we analyze the stability of XCP (eXplicit Control Protocol) in a network with heterogeneous XCP flows (i.e., XCP flows with different propagation delays). Specifically, we model a network with heterogeneous XCP flows using fluid-flow approximation. We then derive the conditions that the XCP control parameters should satisfy for stable XCP operation. Furthermore, through several numerical examples and simulation results, we quantitatively investigate the effect of system parameters and XCP control parameters on the stability of the XCP protocol. Our findings include: (1) when XCP flows are heterogeneous, XCP operates more stably than when XCP flows are homogeneous, (2) conversely, when the variation in the propagation delays of XCP flows is large, the operation of XCP becomes unstable, and (3) the stability of the XCP protocol is independent of the output link bandwidth of an XCP router.
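
    For reference, the efficiency controller of XCP that such fluid-flow models are built around is shown below; the constants are those of the original XCP proposal, and the paper above derives the conditions that control parameters such as α and β must satisfy when flows have heterogeneous delays.

    \[
      \phi = \alpha \, d \,\bigl(C - y(t)\bigr) - \beta \, Q(t),
      \qquad \alpha = 0.4, \quad \beta = 0.226,
    \]

    where d is the average RTT (control interval), C the output link capacity, y(t) the aggregate input rate, and Q(t) the persistent queue length; the aggregate feedback φ is then divided among individual flows by XCP's fairness controller.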

  • Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    Yusuke SAKUMOTO  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E95-B No:5
      Page(s):
    1592-1601

    In this paper, we investigate the performance of Thorup's algorithm by comparing it with Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient execution of shortest-path computations in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M + N) log N). A sophisticated alternative, Thorup's algorithm, has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M + N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(M α(N) + N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performance (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
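
    A minimal sketch of the DIJKSTRA-BH baseline referred to above: Dijkstra's SSSP algorithm with a binary heap, running in O((M + N) log N) time.

    import heapq

    def dijkstra_bh(graph, source):
        """graph: {u: [(v, weight), ...]}; returns shortest distances from source."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                 # stale heap entry, node already settled
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 4)], 2: [(3, 1)], 3: []}
    print(dijkstra_bh(g, 0))  # {0: 0, 1: 2, 2: 3, 3: 4}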

  • Estimating Node Characteristics from Topological Structure of Social Networks

    Kouhei SUGIYAMA  Hiroyuki OHSAKI  Makoto IMASE  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E92-B No:10
      Page(s):
    3094-3101

    In this paper, to systematically evaluate methods for estimating node characteristics, we first propose a social network generation model called LRE (Linkage with Relative Evaluation). LRE aims to reproduce the characteristics of a social network by utilizing the fact that people generally build relationships with others based on relative evaluation rather than absolute evaluation. We then extensively evaluate the accuracy of the estimation method called SSI (Structural Superiority Index). We reveal that SSI is effective for finding good nodes (e.g., top 10% nodes), but cannot be used for finding excellent nodes (e.g., top 1% nodes). To alleviate the problems of SSI, we propose a novel scheme for enhancing existing estimation methods called RENC (Recursive Estimation of Node Characteristic). RENC reduces the effect of noise by recursively estimating node characteristics. By investigating the estimation accuracy with RENC, we show that RENC is quite effective for improving the estimation accuracy in practical situations.

  • Performance Analysis of Content-Centric Networking on an Arbitrary Network Topology

    Ryo NAKAMURA  Hiroyuki OHSAKI  

     
    PAPER

      Publicized:
    2017/07/05
      Vol:
    E101-B No:1
      Page(s):
    24-34

    In this paper, we use the MCA (Multi-Cache Approximation) algorithm to numerically determine the cache hit probability in a multi-cache network, and then analytically obtain performance metrics for Content-Centric Networking (CCN). Our analytical model contains multiple routers, multiple repositories (e.g., storage servers), and multiple entities (e.g., clients). We obtain three performance metrics: content delivery delay (i.e., the average time required for an entity to retrieve a content through a neighboring router), throughput (i.e., the number of contents delivered to an entity per unit of time), and availability (i.e., the probability that an entity can successfully retrieve a content from the network). Through several numerical examples, we investigate how the network topology affects the performance of CCN. A notable finding is that content caching becomes more beneficial in terms of content delivery time and availability (resp., throughput) as the distance between the entity and the repository it requests content from narrows (resp., widens).

  • Stepping-Random Code: A Rateless Erasure Code for Short-Length Messages

    Zan-Kai CHONG  Bok-Min GOI  Hiroyuki OHSAKI  Bryan Cheng-Kuan NG  Hong-Tat EWE  

     
    PAPER

      Vol:
    E96-B No:7
      Page(s):
    1764-1771

    A rateless erasure code is an error correction code that can encode a message of k uncoded symbols into an infinite number of coded symbols. One may reconstruct the original message from any k(1+ε) coded symbols, where ε denotes the decoding inefficiency. This paper proposes a hybrid code that combines a stepping code and a random code, named the Stepping-Random (SR) code. The Part I (first k) coded symbols of the SR code are generated with the stepping code. The rest of the coded symbols are generated with the random code and are denoted as Part II coded symbols. The numerical results show that the new hybrid code is able to achieve complete decoding with no extra coded symbols (ε=0) if all the Part I coded symbols are received without loss. However, if only a portion of the Part I coded symbols is received, a high probability of complete decoding is still achievable with k+10 coded symbols from the combination of Parts I and II. The SR code has a decoding complexity of O(k) in the former case and O((βk)^3) in the latter, where β ∈ R, 0 ≤ β ≤ 1, is the fraction of uncoded symbols that fail to be reconstructed from the Part I coded symbols.
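
    An illustrative sketch of the random-code half of the scheme above: each Part II coded symbol is the XOR of a randomly chosen subset of the k uncoded symbols, and such symbols can be generated indefinitely (rateless). The uniform subset selection is our assumption, and the Part I stepping code is not reproduced here.

    import random

    def random_coded_symbol(source_symbols, rng=random):
        """Return (index_set, xor_of_those_symbols) for one Part II coded symbol."""
        k = len(source_symbols)
        idx = [i for i in range(k) if rng.random() < 0.5]
        if not idx:                      # avoid the useless empty combination
            idx = [rng.randrange(k)]
        value = 0
        for i in idx:
            value ^= source_symbols[i]
        return idx, value

    msg = [0x11, 0x22, 0x33, 0x44]       # k = 4 uncoded symbols
    for _ in range(3):
        print(random_coded_symbol(msg))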

  • Steady State Analysis of the RED Gateway: Stability, Transient Behavior, and Parameter Setting

    Hiroyuki OHSAKI  Masayuki MURATA  

     
    PAPER

      Vol:
    E85-B No:1
      Page(s):
    107-115

    Several gateway-based congestion control mechanisms have been proposed to support the end-to-end congestion control mechanism of TCP (Transmission Control Protocol). One promising gateway-based congestion control mechanism is the RED (Random Early Detection) gateway. Although the effectiveness of the RED gateway depends entirely on the choice of its control parameters, how to configure them has not been fully investigated. In this paper, we analyze the steady-state behavior of the RED gateway by explicitly modeling the congestion control mechanism of TCP. We first derive the equilibrium values of the TCP window size and the buffer occupancy of the RED gateway. Also derived, using a control-theoretic approach, are the stability condition and the transient performance index of the network. Numerical examples as well as simulation results are presented to clearly show the relations between the control parameters and the steady-state behavior.
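
    For reference, the classic RED marking rule whose control parameters (queue weight w_q, thresholds min_th and max_th, and maximum probability max_p) the analysis above is concerned with; this sketch omits RED's count-based spreading of drops and its idle-period adjustment.

    class RedGateway:
        def __init__(self, w_q=0.002, min_th=5, max_th=15, max_p=0.1):
            self.w_q, self.min_th, self.max_th, self.max_p = w_q, min_th, max_th, max_p
            self.avg = 0.0                      # EWMA of the queue length

        def drop_probability(self, queue_len):
            # avg <- (1 - w_q) * avg + w_q * q on each packet arrival
            self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
            if self.avg < self.min_th:
                return 0.0
            if self.avg >= self.max_th:
                return 1.0
            return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)

    red = RedGateway()
    for q in (2, 8, 12, 20, 20, 20):
        print(round(red.drop_probability(q), 4))  # stays 0.0 until avg exceeds min_th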
