Yunjie GU Yuehang DING Yuxiang HU
A Service Function Chain (SFC) is an ordered sequence of virtual network functions (VNFs) that provides a network service. Most existing SFC orchestration schemes, however, cannot optimize resource allocation while guaranteeing the service delay constraint. To fulfill this goal, we propose a Layered Graph based SFC Orchestration Scheme (LGOS). LGOS converts both the resource cost and the related delay into link weights in a layered graph, which allows the SFC orchestration problem to be abstracted as a shortest path problem. A simulated annealing based batch processing algorithm is then designed for sets of SFC requests. Through extensive evaluations, we demonstrate that our scheme reduces the end-to-end delay and the operational expenditure by at least 21.6% and 13.7%, respectively, and improves the acceptance ratio of request sets by 22.3%, compared with other algorithms.
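As a minimal illustration of the layered-graph idea (a sketch, not the authors' implementation), the snippet below builds one copy of a toy substrate graph per VNF in the chain, sets each link weight to a weighted sum of delay and resource cost, and runs Dijkstra from the source in layer 0 to the destination in the last layer. The topology, costs, and trade-off factor are all hypothetical.

```python
import heapq

# Hypothetical substrate: links carry (delay, bandwidth cost); nodes have a VNF-hosting cost.
links = {("s", "a"): (2, 1), ("a", "b"): (3, 2), ("b", "t"): (1, 1), ("a", "t"): (5, 4)}
host_cost = {"s": 0, "a": 2, "b": 1, "t": 0}
ALPHA = 0.5  # assumed trade-off between delay and resource cost

def layered_graph(chain_len):
    """Layer l holds a copy of every node; crossing from layer l to l+1 at node v
    means 'instantiate the (l+1)-th VNF of the chain on v'."""
    g = {}
    for l in range(chain_len + 1):
        for (u, v), (d, c) in links.items():
            # intra-layer edges: plain forwarding, weight mixes delay and cost
            w = ALPHA * d + (1 - ALPHA) * c
            g.setdefault((u, l), []).append(((v, l), w))
            g.setdefault((v, l), []).append(((u, l), w))
        if l < chain_len:
            for v, hc in host_cost.items():
                # inter-layer edges: cost of placing the next VNF on node v
                g.setdefault((v, l), []).append(((v, l + 1), (1 - ALPHA) * hc))
    return g

def dijkstra(g, src, dst):
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in g.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

g = layered_graph(chain_len=2)          # an SFC with two VNFs
print(dijkstra(g, ("s", 0), ("t", 2)))  # cheapest joint placement and routing
```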
JianNan ZHANG JiJun ZHOU JianFeng WU ShengYing YANG
Convolutional neural networks (CNNs) have a strong ability to understand and judge images. However, their enormous parameter counts and computation have limited their application in resource-limited devices. In this letter, we use the ideas of parameter sharing and dense connection to compress the parameters along the channel direction of the convolution kernels, thus greatly reducing the number of model parameters. On this basis, we design Shared and Dense Channel-wise Convolutional Networks (SDChannelNets), mainly composed of depth-wise separable SD-Channel-wise convolution layers. The advantage of SDChannelNets is that the number of model parameters is greatly reduced with little or no loss of accuracy. We also introduce a hyperparameter that effectively balances the number of parameters against the accuracy of a model. We evaluated the proposed model on two popular image recognition tasks (CIFAR-10 and CIFAR-100). The results show that SDChannelNets achieve accuracy similar to other CNNs while greatly reducing the number of parameters.
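As a back-of-the-envelope illustration of why compressing along the kernel channel direction helps (assumed layer sizes, and without modeling the additional channel-wise sharing specific to SDChannelNets), the snippet below compares the parameter counts of a standard convolution and a depth-wise separable one.

```python
def standard_conv_params(c_in, c_out, k):
    # every output channel has its own k x k x c_in kernel
    return c_out * c_in * k * k

def depthwise_separable_params(c_in, c_out, k):
    # one k x k kernel per input channel, then a 1x1 point-wise convolution
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 128, 3          # assumed layer size
std = standard_conv_params(c_in, c_out, k)
dws = depthwise_separable_params(c_in, c_out, k)
print(std, dws, dws / std)            # 147456 vs 17536, roughly an 8x reduction
```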
This letter presents ternary convolutional codes and their punctured codes with optimum distance spectrum.
Harunobu DAIKOKU Hideyuki KAWASHIMA Osamu TATEBE
This paper proposes and examines three in-memory shuffling methods designed to address problems in MapReduce shuffling caused by skewed data. Coupled Shuffle Architecture (CSA) employs a single pairwise all-to-all exchange to shuffle both blocks, the units of shuffle transfer, and meta-blocks, which contain the metadata of the corresponding blocks. Decoupled Shuffle Architecture (DSA) separates the shuffling of meta-blocks and blocks, and applies a different all-to-all exchange algorithm to each shuffling process, attempting to mitigate the impact of stragglers under strongly skewed distributions. Decoupled Shuffle Architecture with Skew-Aware Meta-Shuffle (DSA w/ SMS) autonomously determines the proper placement of blocks based on the memory consumption of each worker process. This approach targets extremely skewed situations in which some worker processes could exceed their node memory limit. This study evaluates implementations of the three shuffling methods in our prototype in-memory MapReduce engine, which employs high-performance interconnects such as InfiniBand and Intel Omni-Path. Our results suggest that DSA w/ SMS is the only viable solution for extremely skewed data distributions. We also present a detailed investigation of the performance of CSA and DSA in various skew situations.
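To make the coupled/decoupled distinction concrete, here is a toy single-process simulation (not the paper's MPI/InfiniBand implementation): the decoupled variant first exchanges only the meta-blocks (sizes), so each receiver can plan its block transfers, e.g., fetch the largest block first, before the heavy all-to-all of the blocks themselves. Worker counts and data are made up.

```python
# Toy model: worker w holds, for every destination d, an outgoing block of records.
blocks = {
    0: {0: ["a"] * 2, 1: ["b"] * 9},   # worker 0's outgoing blocks (skewed)
    1: {0: ["c"] * 1, 1: ["d"] * 1},
}

def coupled_shuffle(blocks):
    # CSA-like: metadata and data travel together in one all-to-all exchange
    inbox = {d: [] for d in blocks}
    for src, per_dst in blocks.items():
        for dst, blk in per_dst.items():
            inbox[dst].append((src, len(blk), blk))
    return inbox

def decoupled_shuffle(blocks):
    # DSA-like: phase 1 exchanges meta-blocks (sizes) only ...
    meta = {d: {s: len(per[d]) for s, per in blocks.items()} for d in blocks}
    # ... so phase 2 can, for example, fetch the largest blocks first
    inbox = {d: [] for d in blocks}
    for dst, sizes in meta.items():
        for src in sorted(sizes, key=sizes.get, reverse=True):
            inbox[dst].append((src, sizes[src], blocks[src][dst]))
    return inbox

print(decoupled_shuffle(blocks)[1])  # worker 1 receives the 9-record block first
```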
Toyotaro TOKIMOTO Shintaro TOKIMOTO Kengo FUJII Shogo MORITA Hirotsugu YAMAMOTO
We propose a method to realize subjective super-resolution on a high-speed LED display, which dynamically shows a set of four neighboring pixels on every LED pixel. We have experimentally confirmed the subjective super-resolution effect. This paper proposes a subjective super-resolution hypothesis for the human visual system and reports simulation results with pseudo fixation eye movements.
Xiaole LI Hua WANG Shanwen YI Linbo ZHAI
The periodic disaster backup activity among geographically distributed datacenters consumes huge network resources and therefore imposes a heavy burden on datacenters and transmission links. Previous work aims at minimum completion time, maximum utility, or minimal cost, without considering load balance over the limited network resources, which is likely to result in an unfair distribution of backup load or a significant impact on daily network services. In this paper, we propose a new progressive forwarding disaster backup strategy for Software Defined Network scenarios to mitigate the forwarding burden on source datacenters and balance backup loads across backup datacenters and transmission links. We construct a new redundancy-aware time-expanded network model to divide time slots according to the redundancy requirement, and propose a role-switching method over time to utilize the forwarding capability of backup datacenters. In every time slot, we leverage a two-step optimization algorithm to realize capacity-constrained backup datacenter selection and fair backup load distribution. Simulation results show that our strategy achieves good load balance while guaranteeing transmission completion and backup redundancy.
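For readers unfamiliar with time-expanded networks, the sketch below shows the generic construction that such a model builds on (the paper's redundancy-aware extension and role switching are not reproduced): every datacenter gets one copy per time slot, transfer links connect copies in consecutive slots, and holdover links model data stored at a node across slots. Node names and capacities are hypothetical.

```python
def time_expanded(nodes, links, T):
    """Generic time-expanded construction: copy (v, t) per node v and slot t,
    a transfer link (u, t) -> (v, t+1) per physical link, and a holdover link
    (v, t) -> (v, t+1) modelling data kept at v between slots."""
    tx_nodes = [(v, t) for v in nodes for t in range(T)]
    tx_links = []
    for t in range(T - 1):
        for u, v, cap in links:
            tx_links.append(((u, t), (v, t + 1), cap))   # transfer during slot t
        for v in nodes:
            tx_links.append(((v, t), (v, t + 1), None))  # storage (uncapacitated)
    return tx_nodes, tx_links

nodes = ["src", "b1", "b2"]                              # hypothetical datacenters
links = [("src", "b1", 10), ("src", "b2", 10), ("b1", "b2", 5)]  # capacity per slot
print(len(time_expanded(nodes, links, T=4)[1]))          # 3 slots x (3 transfers + 3 holds) = 18
```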
Xuan WANG Bofeng ZHANG Mingqing HUANG Furong CHANG Zhuocheng ZHOU
When individuals make a purchase from online sources, they may lack first-hand knowledge of the product. In such cases, they judge the quality of the item by the reviews other consumers have posted. It is therefore important to determine whether comments about a product are credible. Most conventional research on comment credibility has employed supervised machine learning methods, which have the disadvantage of requiring large quantities of training data. This paper proposes an unsupervised method for judging comment credibility based on the Biterm Sentiment Latent Dirichlet Allocation (BS-LDA) model. Using this approach, we first derive several distributions and use them to calculate a credibility score for each comment. A comment's credibility is then judged by whether its score reaches a threshold. Our experimental results on comments from Amazon.com demonstrate that the proposed approach can play an important role in determining the credibility of comments in some situations.
In conventional video streaming systems, various kinds of video streams are delivered from a dedicated server (e.g., an edge server) to the subscribers, where a stream of higher quality is encoded at a higher bitrate. In this paper, we consider the problem of delivering those video streams with the assistance of Peer-to-Peer (P2P) technology at as small a server cost as possible, while keeping the performance of video streaming in terms of throughput and latency. The basic idea of the proposed method is to divide a given video stream into several sub-streams called stripes, as evenly as possible, and to deliver those stripes to the subscribers through different tree-structured overlays. Such a stripe-based approach averages the load over peers and effectively resolves the overloading of the overlay for high quality video streams. The performance of the proposed method is evaluated numerically. The results indicate that the proposed method significantly reduces the server cost necessary to guarantee a designated number of delivery hops, compared with a naive tree-based scheme.
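A minimal sketch of the stripe idea (illustrative only; the overlay construction itself is not shown): a sequence of chunks is split round-robin into b stripes so that each stripe carries roughly 1/b of the bitrate and can be pushed down its own tree, and a subscriber interleaves the stripes back into play-out order.

```python
def split_into_stripes(chunks, b):
    """Round-robin split of a chunk sequence into b roughly equal stripes."""
    stripes = [[] for _ in range(b)]
    for i, chunk in enumerate(chunks):
        stripes[i % b].append(chunk)
    return stripes

def reassemble(stripes):
    """A subscriber interleaves the received stripes back into play-out order."""
    out, pos = [], 0
    while True:
        advanced = False
        for s in stripes:
            if pos < len(s):
                out.append(s[pos])
                advanced = True
        if not advanced:
            return out
        pos += 1

chunks = list(range(10))                      # hypothetical video chunks
stripes = split_into_stripes(chunks, b=3)     # each stripe ~1/3 of the bitrate
assert reassemble(stripes) == chunks          # subscribers recover the stream
```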
Zuopeng ZHAO Hongda ZHANG Yi LIU Nana ZHOU Han ZHENG Shanyi SUN Xiaoman LI Sili XIA
Although correlation filter-based trackers have demonstrated excellent performance for visual object tracking, there remain several challenges to be addressed. In this work, we propose a novel tracker based on the correlation filter framework. Traditional trackers face difficulty in accurately adapting to changes in the scale of the target when the target moves quickly. To address this, we suggest a scale adaptive scheme based on prediction scales. We also incorporate a speed-based adaptive model update method to further improve overall tracking performance. Experiments with samples from the OTB100 and KITTI datasets demonstrate that our method outperforms existing state-of-the-art tracking algorithms in fast motion scenes.
In this letter, we investigate the separating redundancy of binary linear codes. Using analytical techniques, we provide a general lower bound on the first separating redundancy of binary linear codes and show the bound is tight for a particular family of binary linear codes, i.e., cycle codes. In other words, the first separating redundancy of cycle codes can be determined. We also derive a deterministic and constructive upper bound on the second separating redundancy of cycle codes, which is shown to be better than the general deterministic and constructive upper bounds for the codes.
Triangle enumeration is one of the fundamental problems on graph data. Although several triangle enumerating algorithms based on MapReduce have been proposed, they still generate a large amount of intermediate data. In this paper, we propose efficient MapReduce algorithms that enumerate every triangle in a massive graph based on a vertex partition. Since a triangle is composed of an edge and a wedge, our algorithms check the existence of an edge connecting the end-nodes of each wedge. To generate every triangle of a graph in parallel, we first split the graph into several vertex partitions and group the edges and wedges of the graph for each pair of vertex partitions. Then, we form the triangles appearing in each group. Furthermore, to enhance performance, we remove the duplicated wedges existing in several groups. Our experimental evaluation shows that the performance of the proposed algorithm is better than that of the state-of-the-art algorithm in diverse environments.
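The edge-plus-wedge check at the core of this approach can be sketched in a few lines of plain Python (a serial stand-in for the MapReduce jobs; the vertex partitioning and wedge deduplication are omitted). The toy graph below is made up.

```python
from collections import defaultdict
from itertools import combinations

edges = {(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)}   # toy undirected graph

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def wedges(adj):
    """A wedge is a path u - c - v centered at c; yield its end-nodes."""
    for c, nbrs in adj.items():
        for u, v in combinations(sorted(nbrs), 2):
            yield u, c, v

def triangles(adj, edges):
    """A wedge u - c - v closes into a triangle iff the edge (u, v) exists."""
    edge_set = {frozenset(e) for e in edges}
    found = set()
    for u, c, v in wedges(adj):
        if frozenset((u, v)) in edge_set:
            found.add(frozenset((u, c, v)))
    return found

print(triangles(adj, edges))   # the two triangles {1, 2, 3} and {2, 3, 4}
```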
Yiheng JIAN Xiao YU Zhou XU Ziyi MA
Fault prediction aims to identify whether a software module is defect-prone or not according to metrics that are mined from software projects. These metric values, also known as features, may be irrelevant or redundant, which hurts the performance of fault prediction models. In order to filter out irrelevant and redundant features, a Hybrid Feature Selection (abbreviated as HFS) method for software fault prediction is proposed. The proposed HFS method consists of two major stages: first, HFS groups features with hierarchical agglomerative clustering; second, HFS selects the most valuable features from each cluster to remove irrelevant and redundant ones based on two wrapper-based strategies. The empirical evaluation was conducted on 11 widely-studied NASA projects, using three different classifiers with four performance metrics (precision, recall, F-measure, and AUC). Comparison with six filter-based feature selection methods demonstrates that HFS achieves higher average F-measure and AUC values. Compared with two classic wrapper feature selection methods, HFS obtains competitive prediction performance in terms of average AUC while significantly reducing the computation cost of the wrapper process.
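A simplified sketch of the two-stage idea, using scipy and scikit-learn as assumed libraries (the paper's exact linkage, wrapper strategies, and classifiers may differ): features are first grouped by agglomerative clustering on their correlation, then one representative per cluster is kept according to a wrapper score. The data below are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))           # synthetic metrics (200 modules, 12 features)
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic defect labels

# Stage 1: cluster features so that correlated (redundant) ones fall together.
corr_dist = 1.0 - np.abs(np.corrcoef(X.T))
Z = linkage(squareform(corr_dist, checks=False), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")

# Stage 2: wrapper step -- keep the single best-scoring feature of each cluster.
clf, selected = LogisticRegression(max_iter=1000), []
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    scores = [cross_val_score(clf, X[:, [f]], y, cv=5).mean() for f in members]
    selected.append(int(members[int(np.argmax(scores))]))

print(sorted(selected))   # indices of the retained, non-redundant features
```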
Nuttapong ATTRAPADUNG Goichiro HANAOKA Shinsaku KIYOMOTO Tomoaki MIMOTO Jacob C. N. SCHULDT
Secure two-party comparison plays a crucial role in many privacy-preserving applications, such as privacy-preserving data mining and machine learning. In particular, the available comparison protocols with the appropriate input/output configuration have a significant impact on the performance of these applications. In this paper, we first describe a taxonomy of secure two-party comparison protocols that allows us to describe the different configurations used for these protocols in a systematic manner. This taxonomy leads to a total of 216 types of comparison protocols. We then describe conversions among these types. While these conversions are based on known techniques and have been considered, explicitly or implicitly, in previous work, we show that a combination of these conversion techniques can be used to convert the perhaps less well-known two-party comparison protocol by Nergiz et al. (IEEE SocialCom 2010) into a very efficient protocol in a configuration where the two parties hold shares of the values being compared and obtain a share of the comparison result. This setting is often used in multi-party computation protocols, and hence in many privacy-preserving applications as well. We furthermore implement the protocol and measure its performance. Our measurements suggest that the protocol outperforms previously proposed protocols for this input/output configuration when off-line pre-computation is not permitted.
Chaima DHAHRI Kazunori MATSUMOTO Keiichiro HOASHI
Upcoming mood prediction plays an important role in different topics such as bipolar depression disorder in psychology, and quality of life and recommendations in health-related quality-of-life research. The mood in this study is defined as the general emotional state of a user. In contrast to emotions, which are more specific and vary within a day, mood is described as having either a positive or a negative valence [1]. We propose an autonomous system that predicts the upcoming user mood based on their online activities over cyber, social, and physical spaces, without using extra devices or sensors. Recently, many researchers have relied on online social networks (OSNs) to detect user mood. However, existing works have focused on inferring the current mood, and only a few have addressed predicting the upcoming mood. For this reason, we define the new goal of predicting the upcoming mood. We first collected ground-truth data over two months from 383 subjects. Then, we studied the correlation between the extracted features and users' mood. Finally, we used these features to train two predictive systems: a generalized one and a personalized one. The results suggest a statistically significant correlation between tomorrow's mood and today's activities on OSNs, which can be used to develop a decent predictive system with an average accuracy of 70% and a recall of 75% for the correlated users. This performance increases to an average accuracy of 79% and a recall of 80% for active users who have more than 30 days of history data. Moreover, we show that, for non-active users, the generalized system can compensate for the lack of data at the early stage of the system; once enough data for each user is available, the personalized system is used to individually predict the upcoming mood.
Maya OKAWA Yusuke TANAKA Takeshi KURASHIMA Hiroyuki TODA Tomohiro YAMADA
With the acceptance of social sharing, public bike sharing services have become popular worldwide. One of the most important tasks in operating a bike sharing system is managing the bike supply at each station to avoid running out of either bicycles or docks to park them. This requires the system operator to redistribute bicycles from overcrowded stations to under-supplied ones. Trip demand prediction plays a crucial role in improving redistribution strategies. Predicting trip demand is a highly challenging problem because it is influenced by multiple levels of factors, both environmental and individual, e.g., weather and user characteristics. Although several existing studies successfully address some of these factors in isolation, no framework exists that can consider all of them simultaneously. This paper starts by analyzing trip data from real-world bike-sharing systems. The analysis reveals the interplay of the multiple levels of factors. Based on the analysis results, we develop a novel form of the point process that jointly incorporates multiple levels of factors to predict trip demand, i.e., to predict future pick-up and drop-off levels and when over-demand is likely to occur. Our extensive experiments on real-world bike sharing systems demonstrate the superiority of our trip demand prediction method over five existing methods.
This paper considers Peer-to-Peer (P2P) video streaming systems in which a given video stream is divided into b stripes and those stripes are delivered to n peers through b spanning trees under the constraint that each peer, including the source, can forward at most b stripes. The delivery of a stripe to n peers is said to be a k-hop delivery if all peers receive the stripe through a path of length at most k. Let B_k = ∑_{i=0}^{k-1} b^i. It is known that, under the above constraint, k-hop delivery of b stripes to n peers is possible only if n ≤ B_k. This paper proves that (k+1)-hop delivery of b stripes to n peers is possible for any n ≤ B_k; namely, we can realize the delivery of stripes with a guaranteed latency that is only slightly larger than the minimum. In addition, we derive a necessary and sufficient condition on n for a k-hop delivery of b stripes to be possible when B_k - b + 2 ≤ n ≤ B_k - 1, namely, for values of n close to B_k.
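For concreteness, the short computation below (a numeric check of the stated bound, not part of the paper's proof) evaluates B_k, which is simply the geometric sum 1 + b + ... + b^(k-1), for assumed values of b and k.

```python
def B(k, b):
    """B_k = 1 + b + ... + b**(k-1): the bound on n for k-hop delivery."""
    return sum(b ** i for i in range(k))

b, k = 3, 4
print(B(k, b))                      # 40
print((b ** k - 1) // (b - 1))      # closed form of the geometric sum, also 40
# k-hop delivery of b stripes is possible only if n <= B(k, b);
# the paper shows (k+1)-hop delivery is always possible for such n.
```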
Tingxiao YANG Yuichiro YOSHIMURA Akira MORITA Takao NAMIKI Toshiya NAKAGUCHI
In this paper, we propose a Pyramid Predictive Attention Network (PPAN) for medical image segmentation. In the medical field, the size of the available datasets generally restricts the performance of deep CNNs, and deploying a trained network with a large number of parameters on a terminal device with limited memory is desirable. Our team aims at future home medical diagnosis and therefore seeks a lightweight medical image segmentation network. We designed PPAN mainly from Xception blocks, which are modified from DeepLab v3+ and consist of depthwise separable convolutions, to speed up computation and reduce the number of parameters. Meanwhile, pyramid predictions from each dimension stage guide the network toward the final segmentation target and make the training process easier to optimize without degrading performance. The IoU metric is used for evaluation on the test dataset. We compared the performance of our designed network with current state-of-the-art segmentation networks on our RGB tongue dataset, which was captured by the developed TIAS system for tongue diagnosis. Our designed network reduces the number of parameters by 80 percent compared with U-Net, the network most widely used in medical image segmentation, and achieves similar or better performance. Any terminal with limited storage that needs to segment RGB images can adopt the proposed PPAN.
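One way to read the "pyramid predictions guide the training" idea is as deep supervision: auxiliary predictions at each stage are upsampled and their losses added to the main loss. The sketch below (PyTorch assumed; shapes, weights, and the loss form are illustrative and not the authors' exact PPAN code) shows such a combination.

```python
import torch
import torch.nn.functional as F

def pyramid_loss(final_logits, stage_logits, target, aux_weight=0.4):
    """Cross-entropy on the final prediction plus weighted auxiliary losses
    on each pyramid stage, upsampled to the target resolution."""
    loss = F.cross_entropy(final_logits, target)
    for logits in stage_logits:
        up = F.interpolate(logits, size=target.shape[-2:], mode="bilinear",
                           align_corners=False)
        loss = loss + aux_weight * F.cross_entropy(up, target)
    return loss

# toy shapes: batch 2, 2 classes, 64x64 target, two coarser pyramid stages
target = torch.randint(0, 2, (2, 64, 64))
final = torch.randn(2, 2, 64, 64)
stages = [torch.randn(2, 2, 32, 32), torch.randn(2, 2, 16, 16)]
print(pyramid_loss(final, stages, target))
```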
A key-order preserving structured overlay network is a class of structured overlay network that preserves, in its structure, the order of keys to support efficient range queries. This paper presents a novel key-order preserving structured overlay network, "Suzaku". Like the conventional Chord#, Suzaku uses a periodically updated finger table as its routing table, but extends the uni-directional finger table to a bi-directional one, which achieves ⌈log₂ n⌉ - 1 maximum lookup hops in the converged state. Suzaku introduces active and passive bi-directional finger table update algorithms for node insertion and deletion. This method maintains good lookup performance (lookup hops increase nearly logarithmically with n) even under churn. In addition to its good performance, the algorithms of Suzaku are simple and easy to implement. This paper describes the principles of Suzaku, followed by simulation evaluations, which show better performance than the conventional networks Chord# and Skip Graph.
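As a rough, heavily simplified sketch of what bi-directional fingers buy (a static ring with exponentially spaced fingers in both directions and greedy key-order routing; Suzaku's actual table construction and churn-handling updates are not modeled), consider the following.

```python
def build_fingers(ring, node_idx):
    """Fingers at distances 1, 2, 4, ... in both directions along a sorted ring
    (a static approximation of a converged bi-directional finger table)."""
    n, fingers, d = len(ring), [], 1
    while d < n:
        fingers.append(ring[(node_idx + d) % n])   # clockwise finger
        fingers.append(ring[(node_idx - d) % n])   # counter-clockwise finger
        d *= 2
    return fingers

def lookup(ring, start_idx, key):
    """Greedy key-order routing: hop to the known node closest to the key."""
    hops, cur = 0, ring[start_idx]
    while cur != key:
        candidates = build_fingers(ring, ring.index(cur)) + [cur]
        cur = min(candidates, key=lambda x: abs(x - key))
        hops += 1
    return hops

ring = sorted(range(0, 64, 2))                # 32 nodes holding keys 0, 2, ..., 62
print(lookup(ring, start_idx=0, key=40))      # a handful of hops, on the order of log2 n
```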
Yuto KITAGAWA Tasuku ISHIGOOKA Takuya AZUMI
This paper proposes an anomaly prediction method based on k-means clustering for embedded devices with memory constraints. With this method, by checking the control system behavior in detail using k-means clustering, it is possible to predict anomalies. However, continuing the clustering is difficult because data accumulate in memory as in the existing k-means clustering method, which is problematic for embedded devices with low memory capacity. Therefore, we also propose a k-means clustering method that can continue clustering over infinite stream data. The proposed method is based on online k-means clustering with sequential processing: it stores only the data required for anomaly prediction and releases all other data from memory. Owing to these characteristics, the proposed k-means clustering enables anomaly prediction with reduced memory consumption. Experiments were performed with actual control system data. The experimental results show that the proposed method can predict anomalies, and that the proposed k-means clustering predicts anomalies as well as standard k-means clustering does while reducing memory consumption. Moreover, the proposed k-means clustering yields better anomaly prediction results than the existing online k-means clustering.
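A compact sketch of the memory-bounded idea, using the classical sequential (online) k-means update rather than the authors' exact algorithm: each sample updates the nearest centroid and is then discarded, so only k centroids and their counts stay in memory, and a sample far from every centroid is flagged as a potential anomaly. The stream, threshold, and k below are assumed.

```python
import random

def online_kmeans(stream, k, threshold):
    """Process a stream with O(k) memory: keep only k centroids and their counts;
    flag samples whose nearest-centroid distance exceeds the threshold, then
    discard the sample itself instead of accumulating it."""
    centroids, counts, alarms = [], [], []
    for t, x in enumerate(stream):
        if len(centroids) < k:
            centroids.append(x)
            counts.append(1)
            continue
        d, i = min((abs(x - c), i) for i, c in enumerate(centroids))
        if d > threshold:
            alarms.append((t, x))                        # potential anomaly
        counts[i] += 1
        centroids[i] += (x - centroids[i]) / counts[i]   # incremental mean update
    return centroids, alarms

random.seed(0)
stream = [random.gauss(0, 1) for _ in range(500)] + [8.0]   # one injected outlier
centroids, alarms = online_kmeans(stream, k=3, threshold=3.0)
print(alarms)   # the injected outlier is reported
```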
Eiji OKAMOTO Manabu MIKAMI Hitoshi YOSHINO
In fifth-generation mobile communication systems (5G), grant-free non-orthogonal multiple access (NOMA) schemes have been considered as a way to accommodate the many wireless connections required for Internet of Things (IoT) devices. NOMA schemes achieve both system capacity enhancement and transmission protocol simplification, and can accommodate an overload of more than one hundred percent, i.e., more simultaneous transmissions than available orthogonal resources. Multi-user shared multiple access (MUSA) has been proposed as a representative NOMA scheme. However, the performance of MUSA has not been fully analyzed or compared with other NOMA or orthogonal multiple access schemes. Therefore, in this study, we theoretically and numerically analyze the performance of MUSA in uplink fading environments and compare it with orthogonal frequency division multiple access (OFDMA), space division multiple access-based OFDMA, low-density signature, and sparse code multiple access. The characteristics and superiority of MUSA are then clarified.
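To illustrate the overloading that such grant-free schemes target (a simplified baseband sketch with assumed parameters; the paper's analysis and receiver processing are not reproduced), the snippet below spreads one symbol per user with a short ternary complex sequence, as in MUSA-style spreading, and superimposes six users on only four resource elements, i.e., a 150% load.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, spread_len = 6, 4                  # 6 users on 4 resources: 150% load
alphabet = np.array([-1, 0, 1])

# MUSA-style short complex spreading codes: real and imaginary parts drawn
# independently from the small alphabet {-1, 0, 1}.
codes = (rng.choice(alphabet, (n_users, spread_len))
         + 1j * rng.choice(alphabet, (n_users, spread_len)))

symbols = rng.choice([-1.0, 1.0], n_users)          # one BPSK symbol per user
received = codes.T @ symbols                        # grant-free superposition
received += 0.05 * (rng.normal(size=spread_len)     # AWGN on the 4 resources
                    + 1j * rng.normal(size=spread_len))

print(f"overload: {n_users / spread_len:.0%}")      # 150%
print(received)   # what the base station must resolve, e.g., by successive interference cancellation
```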