Shigeaki HARADA, Eiji TAKIMOTO, Akira MARUOKA
We consider the problem of dynamically apportioning resources among a set of options in a worst-case online framework. The model we investigate is a generalization of the well-studied online learning model. In particular, we allow the learner to see, as additional information, how high the risk of each option is. This assumption is natural in many applications such as horse-race betting, where gamblers know the odds for all options before placing bets. We apply Vovk's Aggregating Algorithm to this problem and give a tight performance bound. The results support our intuition that it is safe to bet more on low-risk options. Surprisingly, the loss bound of the algorithm does not depend on the values of relatively small risks.
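As a point of reference, the exponential-weights allocation that the Aggregating Algorithm generalizes can be sketched as follows. This is a minimal sketch only: the risk-dependent refinement studied in the paper is not shown, and the learning rate eta and the assumption that per-round losses lie in [0, 1] are illustrative choices, not the paper's setting.

```python
import numpy as np

def hedge_allocation(loss_rounds, n_options, eta=0.5):
    """Apportion a unit resource among n_options each round; update weights from losses."""
    w = np.ones(n_options)              # uniform initial weights
    total_loss = 0.0
    for losses in loss_rounds:          # losses[i] in [0, 1]: loss of option i this round
        p = w / w.sum()                 # fraction of the resource placed on each option
        total_loss += float(np.dot(p, losses))
        w *= np.exp(-eta * np.asarray(losses, dtype=float))  # exponential weight update
    return w / w.sum(), total_loss
```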
Akito SUZUKI, Ryoichi KAWAHARA, Masahiro KOBAYASHI, Shigeaki HARADA, Yousuke TAKAHASHI, Keisuke ISHIBASHI
Network functions virtualization (NFV) enables telecommunications service providers to realize various network services by flexibly combining multiple virtual network functions (VNFs). To provide such services, an NFV control method should optimally allocate VNFs to physical networks and servers by taking into account the combination of objective functions and constraints for each metric defined for each VNF type, e.g., VNF placement and the routes between VNFs. The NFV control method should also be extendable, so that new metrics can be added or the combination of metrics changed. One approach to optimizing allocations is to construct an algorithm that solves the combined optimization problem all at once. However, this approach is not extendable because the problem must be reformulated every time a new metric is added or the combination of metrics is changed. Another approach is to use an extendable network-control architecture that coordinates multiple control algorithms, each specialized for an individual metric. However, to the best of our knowledge, no method has been developed that can optimize allocations through this kind of coordination. In this paper, we propose an extendable NFV-integrated control method that coordinates multiple control algorithms. We also propose an efficient coordination algorithm based on reinforcement learning. Finally, we evaluate the effectiveness of the proposed method through simulations.
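The abstract does not detail the coordination algorithm itself, but one way a reinforcement-learning coordinator might choose which per-metric control algorithm to apply next can be sketched as below. The controller callables, the state encoding, and the reward function are hypothetical stand-ins introduced only for illustration; this is not the paper's method.

```python
import random
from collections import defaultdict

def coordinate(controllers, encode_state, reward, allocation,
               episodes=100, steps=20, eps=0.1, alpha=0.5, gamma=0.9):
    """Q-learning over the choice of which per-metric controller to apply next."""
    q = defaultdict(float)                        # Q[(state, controller index)]
    for _ in range(episodes):
        alloc = dict(allocation)                  # start from the initial allocation
        for _ in range(steps):
            s = encode_state(alloc)
            if random.random() < eps:             # epsilon-greedy exploration
                a = random.randrange(len(controllers))
            else:
                a = max(range(len(controllers)), key=lambda i: q[(s, i)])
            alloc = controllers[a](alloc)         # let one per-metric algorithm act
            r = reward(alloc)                     # e.g., negative weighted objective value
            s2 = encode_state(alloc)
            best_next = max(q[(s2, i)] for i in range(len(controllers)))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q
```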
Keisuke ISHIBASHI, Shigeaki HARADA, Ryoichi KAWAHARA
In this paper, we propose a CTRIL (Common Trend and Regression with Independent Loss) model to infer the latent traffic demand on overloaded links and how much of it is reduced by QoS (Quality of Service) degradation. To appropriately provision link bandwidth for such overloaded links, we need to infer how much the traffic would increase if there were no QoS degradation. Because the original latent traffic demand cannot be observed, we propose a method that compares the traffic time series of the overloaded link with that of an underloaded link, assuming that the latent traffic demands of both links share a common pattern and that the actualized traffic demand on the overloaded link is reduced from this common pattern by the effect of QoS degradation. To realize this method, we developed the CTRIL model on the basis of a state-space model in which observed traffic is generated from a latent trend but decreased by QoS degradation. By applying the CTRIL model to actual HTTP (Hypertext Transfer Protocol) traffic and QoS time series data, we reveal that 1% packet loss decreases traffic demand by 12.3%, and that the estimated latent traffic demand is 23.0% larger than the observed one.
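The structure described above (a common latent trend shared by both links, with the overloaded link's observation reduced by a regression term on packet loss) can be illustrated with the simulation sketch below. The random-walk trend, noise levels, and the regression coefficient are illustrative assumptions; the coefficient merely echoes the abstract's 12.3% per 1% loss estimate and this is not the paper's exact state-space formulation.

```python
import numpy as np

def simulate_ctril(T=200, beta=12.3, sigma_trend=1.0, sigma_obs=2.0, seed=0):
    """Generate toy traffic series: a shared latent trend, one link degraded by loss."""
    rng = np.random.default_rng(seed)
    trend = 100.0 + np.cumsum(rng.normal(0.0, sigma_trend, T))   # common latent trend
    loss = np.clip(rng.normal(0.5, 0.3, T), 0.0, None)           # packet-loss ratio [%]
    y_under = trend + rng.normal(0.0, sigma_obs, T)               # underloaded link
    y_over = trend - beta * loss + rng.normal(0.0, sigma_obs, T)  # overloaded link
    return trend, loss, y_under, y_over
```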
Shigeaki HARADA, Keisuke ISHIBASHI, Ryoichi KAWAHARA
On the Internet, end hosts and network nodes work interdependently to transfer traffic smoothly, and the observed traffic dynamics are the result of interactions among these entities. To manage Internet traffic and provide services of satisfactory quality, such dynamics need to be well understood so that traffic patterns can be predicted. In particular, some nodes have a function that sends back-pressure signals to upstream nodes to make them reduce their sending rate and thus mitigate congestion. Transmission Control Protocol (TCP) congestion control in end hosts likewise smooths out traffic deviations and eliminates temporary congestion by reducing the TCP sending rate. How these congestion controls mitigate congestion has been extensively investigated. However, these controls only throttle the sending rate; they do not reduce the traffic volume itself. Such congestion control fails when congestion is persistent, e.g., for hours, because unsent traffic demand accumulates without bound. On the actual Internet, however, such accumulation does not seem to occur even under persistent congestion. During congestion, users and/or applications tend to reduce their traffic demand in reaction to quality of service (QoS) degradation to avoid a negative service experience. We previously estimated that 2% packet loss results in a 23% traffic reduction because of this upper-layer reaction [1]. We view this reduction as an upper-layer congestion-avoidance mechanism and construct a closed-loop model of it, which we call the Upper-Layer Closed-Loop (ULCL) model. We also show that, using ULCL, the degree of QoS degradation and traffic reduction can be predicted as an equilibrium of the feedback loop. We applied the model to traffic and packet-loss-ratio time series data gathered in an actual network and demonstrated that it effectively estimates the actual traffic and packet-loss ratio.
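A minimal sketch of the closed-loop equilibrium idea follows: offered traffic determines packet loss, loss feeds back into reduced demand, and the fixed point gives the predicted operating point. The overflow-based loss model and the linear demand-reduction response are illustrative assumptions, not the ULCL model itself; the reduction factor only echoes the abstract's 23% reduction per 2% loss.

```python
def ulcl_equilibrium(latent_demand, capacity, reduction_per_loss=11.5,
                     iters=100, tol=1e-9):
    """Iterate demand -> loss -> reduced demand until the feedback loop converges."""
    demand = latent_demand
    loss = 0.0
    for _ in range(iters):
        # crude loss model: fraction of offered traffic exceeding link capacity
        loss = max(0.0, (demand - capacity) / demand) if demand > 0 else 0.0
        new_demand = latent_demand * max(0.0, 1.0 - reduction_per_loss * loss)
        if abs(new_demand - demand) < tol:
            break
        demand = new_demand
    return demand, loss
```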
Noriaki KAMIYAMA, Tatsuya MORI, Ryoichi KAWAHARA, Shigeaki HARADA
We have proposed a method of identifying superspreaders by flow sampling and a method of filtering legitimate hosts out of the identified superspreaders using a white list. However, the problem of how to optimally set the parameters φ (the measurement period length), m* (the identification threshold on the flow count m within φ), and H* (the identification probability for hosts with m=m*) remained unsolved. These three parameters seriously affect the ability to identify the spread of infection. Our contributions in this work are two-fold: (1) we propose a method of optimally designing these three parameters so that the ratio of the number of active worm-infected hosts to the number of all vulnerable hosts is bounded by a given upper limit during the time T required to develop a patch or an anti-worm vaccine, and (2) the proposed method optimizes the identification accuracy of worm-infected hosts by making maximal use of the limited memory resources of the monitors.
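As a simple illustration of how an identification probability relates to a flow-sampling rate, the sketch below assumes a host is flagged once at least one of its flows is sampled within the measurement period. This assumption, and the resulting closed-form relation, are introduced only for illustration; the paper's actual identification rule and parameter-design procedure are not reproduced here.

```python
def identification_probability(m, sampling_rate):
    """P(a host with m flows has at least one sampled flow within the period)."""
    return 1.0 - (1.0 - sampling_rate) ** m

def sampling_rate_for_target(m_star, h_star):
    """Smallest sampling rate giving identification probability h_star at m = m_star."""
    return 1.0 - (1.0 - h_star) ** (1.0 / m_star)
```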