Wataru KAWAKAMI Kenji KANAI Bo WEI Jiro KATTO
To recognize transportation modes without any additional sensor devices, we demonstrate that transportation modes can be recognized from communication quality factors. In the demonstration, instead of using global positioning system (GPS) and accelerometer sensors, we collect mobile TCP throughputs, received-signal strength indicators (RSSIs), and cellular base-station IDs (Cell IDs) through in-line network measurement while the user enjoys mobile services, such as video streaming. For the accuracy evaluation, we conduct two field experiments to collect data for six typical transportation modes (static, walking, riding a bicycle, riding a bus, riding a train, and riding a subway), and then construct classifiers by applying a support-vector machine (SVM), k-nearest neighbors (k-NN), random forest (RF), and a convolutional neural network (CNN). Our results show that these transportation modes can be recognized from communication quality factors with high accuracy, comparable to that achieved with accelerometer sensors.
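As a rough illustration of the classifier-construction step, the sketch below trains one of the evaluated models (a random forest) on window-level features derived from throughput, RSSI, and Cell ID traces. The feature set, window statistics, and synthetic data are assumptions for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch: classify transportation modes from
# communication-quality features (TCP throughput, RSSI, Cell ID changes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

MODES = ["static", "walking", "bicycle", "bus", "train", "subway"]

def extract_features(throughput, rssi, cell_ids):
    """Summarize one measurement window into a feature vector (assumed)."""
    return np.array([
        np.mean(throughput), np.std(throughput),   # throughput statistics
        np.mean(rssi), np.std(rssi),               # RSSI statistics
        len(set(cell_ids)),                        # number of distinct cells
    ])

# Synthetic placeholder windows; real data would come from in-line measurement.
rng = np.random.default_rng(0)
windows = [(rng.normal(20, 5, 60),      # Mbps samples
            rng.normal(-80, 6, 60),     # dBm samples
            rng.integers(0, 4, 60))     # observed Cell IDs
           for _ in range(600)]
X = np.array([extract_features(*w) for w in windows])
y = rng.integers(0, len(MODES), size=600)   # placeholder mode labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```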
Amin JAMALI Seyed Mostafa SAFAVI HEMAMI Mehdi BERENJKOUB Hossein SAIDI Masih ABEDINI
Device-to-device (D2D) communication in cellular networks is defined as direct communication between two mobile users without traversing the base station (BS) or core network. D2D communication can occur on cellular frequencies (i.e., inband) or on unlicensed spectrum (i.e., outband). This paper introduces a high-capacity IEEE 802.11-based outband D2D communication system for cellular networks. Transmissions in D2D connections are managed by our proposed medium access control (MAC) protocol, in which the backoff window size is adjusted dynamically according to the current network status, using an appropriate transmission attempt rate. We consider both the case in which the request-to-send/clear-to-send (RTS/CTS) mechanism is used and the case in which it is not. We also describe mechanisms for guaranteeing quality of service (QoS) and enhancing the reliability of the system. Moreover, the performance of the system in the presence of channel impairments is investigated analytically and through simulations. Analytical and simulation results demonstrate that the proposed system achieves high throughput and can provide different levels of QoS to its users.
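The following minimal sketch illustrates the general idea of adapting a backoff window to an observed transmission attempt rate; the doubling/halving rule and the target rate are illustrative assumptions, not the paper's derived optimal policy.

```python
# Illustrative sketch of dynamic backoff-window adaptation for a D2D MAC.
# The target attempt rate and the update rule are assumptions; the paper
# derives its own appropriate attempt rate from the current network status.
import random

def update_cw(cw, observed_attempt_rate, target_attempt_rate,
              cw_min=16, cw_max=1024):
    """Grow the contention window when the medium is too busy,
    shrink it when attempts fall below the target rate."""
    if observed_attempt_rate > target_attempt_rate:
        cw = min(cw * 2, cw_max)      # too many attempts: back off harder
    else:
        cw = max(cw // 2, cw_min)     # channel underused: be more aggressive
    return cw

def draw_backoff(cw):
    """Uniform backoff slot count, as in 802.11-style MACs."""
    return random.randint(0, cw - 1)

cw = 16
for observed in (0.20, 0.02, 0.30):   # example per-slot attempt rates
    cw = update_cw(cw, observed, target_attempt_rate=0.1)
    print(f"CW={cw}, backoff={draw_backoff(cw)}")
```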
In many applications, tables are stored distributively across different data sources, but the frequency of updates differs from source to source. Techniques have been proposed to effectively express the temporal orders between different values, so that the most current, i.e., up-to-date, value of a given data item can easily be picked according to these orders. However, the currency of the data items in the same table may differ. That is, when a user asks for a table D, it cannot be ensured that all the most current values of the data items in D are stored in a single table. Since different data sources may overlap, we can construct a conjunctive query on multiple tables to obtain all the required current values. In this paper, we formalize such a conjunctive query as a currency preserving query and study how to generate the minimized currency preserving query to reduce the cost of visiting different data sources. First, a graph model is proposed to represent the distributed tables and their relationships. Based on this model, we prove that a currency preserving query is equivalent to a terminal tree in the graph, and give an algorithm to generate a query from a terminal tree. After that, we study the problem of finding the minimized currency preserving query. The problem is proved to be NP-hard, and heuristic strategies are provided to solve it. Finally, we conduct experiments on both synthetic and real data sets to verify the effectiveness and efficiency of the proposed techniques.
Takahiro OGAWA Sho TAKAHASHI Naofumi WADA Akira TANAKA Miki HASEYAMA
Binary sparse representation based on arbitrary quality metrics and its applications are presented in this paper. The novelties of the proposed method are twofold. First, the method newly derives a sparse representation whose coefficients are binary values, which enables the use of arbitrary image quality metrics. This new sparse representation can generate quality metric-independent subspaces while simplifying the calculation procedures. Second, visual saliency is used to pool the quality values obtained for all parts of a target image, which enables more visually pleasing approximation of the target images. By introducing these two novel approaches, successful image approximation that accounts for human perception becomes feasible. Since the proposed method can provide lower-dimensional subspaces obtained from better image quality metrics, improvements in several image reconstruction tasks can be expected. Experimental results show the high performance of the proposed method in two image reconstruction tasks: image inpainting and super-resolution.
Yasutaka KAMEI Takahiro MATSUMOTO Kazuhiro YAMASHITA Naoyasu UBAYASHI Takashi IWASAKI Shuichi TAKAYAMA
Nowadays, open source software (OSS) systems are widely adopted by proprietary software projects. To reduce the risk of using problematic OSS systems (e.g., ones that cause system crashes), it is important for proprietary software projects to assess OSS systems in advance. Therefore, OSS quality assessment models have been studied to obtain information regarding the quality of OSS systems. Although these models have been partially validated through a small number of case studies, to the best of our knowledge, few studies empirically report how industrial projects actually use OSS quality assessment models in their own development processes. In this study, we empirically evaluate the cost and effectiveness of OSS quality assessment models at Fujitsu Kyushu Network Technologies Limited (Fujitsu QNET). To conduct the empirical study, we collect datasets from (a) 120 OSS projects that Fujitsu QNET's projects actually used and (b) 10 problematic OSS projects that caused major problems in those projects. We find that (1) it takes mean and median times of 51 and 49 minutes, respectively, to gather all assessment metrics per OSS project, and (2) problematic OSS systems can potentially be filtered out using a threshold derived from a pool of assessment metrics. Fujitsu QNET's developers agree that our results lead to improvements in the company's OSS assessment process. We believe that our work contributes substantially to the empirical knowledge about applying OSS assessment techniques to industrial projects.
Hui ZHI Feiyue WANG Ziju HUANG
Effective capacity (EC) is an important performance metric for a time-varying wireless channel: it evaluates the communication rate at the physical layer (PHY) while satisfying a statistical delay quality-of-service (QoS) requirement at the data-link layer (DLL). This paper analyzes the EC of amplify-and-forward wireless relay networks under different relay selection (RS) protocols. First, through analysis of the probability density function (PDF) of the received signal-to-noise ratio (SNR), exact expressions of EC are derived for the direct transmission (DT), random relay (RR), random relay with direct transmission (RR-WDT), and best relay (BR) protocols. Then a novel best relay with direct transmission (BR-WDT) protocol is proposed to maximize EC, and an exact expression of its EC is developed. Simulations demonstrate that the derived analytical results match Monte-Carlo simulations well. The proposed BR-WDT protocol always achieves larger EC than the other protocols while guaranteeing the delay QoS requirement. Moreover, the influence of the source-relay distance on EC is discussed, and the optimal relay position for each RS protocol is estimated. Furthermore, the EC of all protocols decreases as the delay QoS exponent grows, while the EC of BR-WDT improves as the number of relays increases.
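For reference, the effective capacity of a block-fading channel with QoS exponent θ is commonly defined as follows (standard notation from the effective-capacity literature; the paper's exact symbols may differ):

```latex
% Effective capacity for a block-fading channel with QoS exponent \theta,
% frame duration T_f, bandwidth B, and per-frame service rate R:
EC(\theta) = -\frac{1}{\theta T_f}
             \ln \mathbb{E}\!\left[ e^{-\theta T_f R} \right],
\qquad
R = B \log_2\!\left( 1 + \mathrm{SNR} \right)
```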
Mohan LI Jianzhong LI Siyao CHENG Yanbin SUN
Currency is one of the important measures of data quality. The main purpose of studying data currency is to determine whether a given data item is up-to-date. Although several works address determining data currency, the proposed methods all have limitations. Some require timestamps of data items, which are not always available; others are based on certain currency rules that can only decide relative currency and cannot express uncertain semantics. To overcome these limitations, this paper introduces a new approach for determining data currency based on uncertain currency rules. First, a class of uncertain currency rules is provided to infer the possible valid time of a given data item, and data currency is then formally defined based on these rules. After that, a polynomial-time algorithm for evaluating data currency is given based on the uncertain currency rules. Using real-life data sets, the effectiveness and efficiency of the proposed method are experimentally verified.
Takashi WATANABE Akito MONDEN Zeynep YÜCEL Yasutaka KAMEI Shuji MORISAKI
Association rule mining discovers relationships among variables in a data set and represents them as rules. These rules are often expected to have predictive ability, that is, to predict future events, but commonly used rule interestingness measures, such as support and confidence, do not directly assess their predictive power. This paper proposes a cross-validation-based metric that quantifies the predictive power of such rules for characterizing software defects. Experimental evaluation of this metric using four open-source data sets (Mylyn, NetBeans, Apache Ant, and jEdit) shows that it can improve rule prioritization performance over conventional metrics (support, confidence, and odds ratio) by 72.8% for Mylyn, 15.0% for NetBeans, 10.5% for Apache Ant, and 0% for jEdit in terms of the SumNormPre(100) precision criterion. This suggests that the proposed metric provides better rule prioritization performance than conventional metrics and, even in the worst case, performs at least as well.
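A minimal sketch of the idea behind a cross-validation-based predictive-power estimate, under the assumption of boolean defect data and a fixed rule: the rule's precision is averaged over held-out folds rather than measured on the full data set. The helper names and toy data are hypothetical; the paper's exact metric may differ.

```python
# Hedged sketch: estimate a rule's predictive power as its average
# precision on held-out folds (cross-validation), for a rule of the
# form "antecedent columns all true -> consequent column true".
import numpy as np
from sklearn.model_selection import KFold

def rule_precision(rule, data):
    """Precision of the rule on a boolean data matrix."""
    antecedent, consequent = rule
    match = np.all(data[:, antecedent], axis=1)
    if match.sum() == 0:
        return 0.0
    return data[match, consequent].mean()

def cv_predictive_power(rule, data, k=10):
    """Average held-out precision over k folds."""
    scores = []
    for _, test_idx in KFold(n_splits=k, shuffle=True,
                             random_state=0).split(data):
        scores.append(rule_precision(rule, data[test_idx]))
    return float(np.mean(scores))

# Toy data: columns 0-2 are module-metric flags, column 3 is "defective".
rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(200, 4))
print(cv_predictive_power(([0, 1], 3), data))
```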
Maoxi LI Qingyu XIANG Zhiming CHEN Mingwen WANG
The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE tasks, called the estimator. We propose to combine the two sub-networks into a single network, called the unified neural network. During training, the bidirectional RNN encoder-decoder is initialized and pre-trained on a bilingual parallel corpus, and the whole network is then trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor-estimator approach, the unified neural network helps train parameters that are better suited to the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network consistently outperforms the predictor-estimator approach and significantly outperforms the other baseline QE approaches.
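A loose sketch of the unification idea, with assumptions throughout (layer sizes, mean-pooling instead of attention, and a sigmoid output are illustrative choices, not the paper's architecture): an encoder-decoder "predictor" part and an "estimator" RNN are fused into one module and trained end-to-end with a mean-absolute-error objective.

```python
# Minimal, assumption-laden sketch of a unified predictor+estimator
# network trained jointly with MAE on sentence-level QE scores.
import torch
import torch.nn as nn

class UnifiedQE(nn.Module):
    def __init__(self, vocab=10000, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(vocab, dim)
        self.tgt_emb = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.decoder = nn.GRU(dim, 2 * dim, batch_first=True)   # predictor part
        self.estimator = nn.GRU(2 * dim, dim, batch_first=True) # estimator part
        self.out = nn.Linear(dim, 1)  # sentence-level quality score

    def forward(self, src, tgt):
        enc, _ = self.encoder(self.src_emb(src))
        dec, _ = self.decoder(self.tgt_emb(tgt))
        # Crude source conditioning via mean-pooled encoder states (assumed).
        est, h = self.estimator(dec + enc.mean(1, keepdim=True))
        return torch.sigmoid(self.out(h[-1])).squeeze(-1)

model = UnifiedQE()
src = torch.randint(0, 10000, (8, 20))
tgt = torch.randint(0, 10000, (8, 22))
score = model(src, tgt)
loss = nn.L1Loss()(score, torch.rand(8))  # MAE over QE training samples
loss.backward()
```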
Zhengxue CHENG Masaru TAKEUCHI Kenji KANAI Jiro KATTO
Image quality assessment (IQA) is a fundamental problem in the field of image processing. Recently, deep learning-based IQA has attracted increased attention owing to its high prediction accuracy. In this paper, we propose a fully-blind and fast image quality predictor (FFIQP) using convolutional neural networks, incorporating two strategies. First, we propose a distortion-clustering strategy based on the distribution function of intermediate-layer results in the convolutional neural network (CNN) to make IQA fully blind. Second, by analyzing the relationship between image saliency information and the CNN's prediction error, we utilize a pre-computed saliency map to skip non-salient patches and thereby accelerate IQA. Experimental results verify that our method achieves high accuracy (0.978) with respect to subjective quality scores, outperforming existing IQA methods. Moreover, the proposed method is computationally appealing, achieving a flexible accuracy-complexity trade-off by assigning different thresholds in the saliency map.
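A minimal sketch of the saliency-based acceleration idea: patches whose mean saliency falls below a threshold are skipped before CNN scoring. The patch size, threshold, and saliency source are illustrative assumptions.

```python
# Illustrative saliency-gated patch selection for CNN-based IQA:
# only sufficiently salient patches are passed to the quality predictor.
import numpy as np

def select_patches(image, saliency, patch=32, threshold=0.2):
    """Yield only image patches considered salient enough to score."""
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if saliency[y:y+patch, x:x+patch].mean() >= threshold:
                yield image[y:y+patch, x:x+patch]

rng = np.random.default_rng(0)
img = rng.random((256, 256))   # placeholder image
sal = rng.random((256, 256))   # placeholder saliency map
kept = sum(1 for _ in select_patches(img, sal))
print(f"scored {kept} of {(256 // 32) ** 2} patches")
```

Raising the threshold skips more patches, which is the flexible accuracy-complexity trade-off the abstract describes.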
An important problem in mathematics and data science is, given two or more metric spaces, to obtain a metric on the product space by aggregating the source metrics with a multivariate function. Borsík and Doboš solved this problem in 1981, and much progress has since been made on its generalizations. The triangle inequality is the key property for a bivariate function to be a metric, and requiring the triangle inequality of the resulting metric imposes subadditivity on the aggregating function. In some applications, however, such as image matching, a relaxed notion of the triangle inequality is useful, and this relaxation may enlarge the scope of admissible aggregators to include natural superadditive functions such as the harmonic mean. This paper examines the aggregation of two semimetrics (i.e., metrics with a relaxed triangle inequality) by the harmonic mean and shows that such aggregation weakly preserves the relaxed triangle inequalities. As an application, the paper presents an alternative simple proof of the relaxed triangle inequality satisfied by the robust Jaccard-Tanimoto set dissimilarity, originally shown by Gragera and Suppakitpaisarn in 2016.
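To fix notation, a ρ-relaxed triangle inequality and the harmonic-mean aggregator of two semimetrics look as follows (the precise preservation constant established in the paper is not reproduced here):

```latex
% \rho-relaxed triangle inequality for a semimetric d, with \rho \ge 1:
d(x,z) \le \rho \bigl( d(x,y) + d(y,z) \bigr)

% Harmonic-mean aggregation of two semimetrics d_1 and d_2:
H(d_1, d_2)(x,y) = \frac{2\, d_1(x,y)\, d_2(x,y)}{d_1(x,y) + d_2(x,y)}
```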
Ling ZHENG Zhiliang QIU Weitao PAN Yibo MEI Shiyong SUN Zhiyi ZHANG
High-performance Network Over Coax, or HINOC for short, is a broadband access technology that achieves bidirectional transmission of high-speed Internet service over a coaxial medium. In HINOC access networks, a buffer management scheme can improve the fairness of buffer usage among output ports and the overall loss performance. To provide differentiated service to multiple priority classes while reducing the overall packet loss rate and ensuring fairness among output ports, this study proposes a QoS optimization method for access networks. A backpressure-based queue threshold control scheme is used to minimize the weighted average packet loss rate across priorities. A theoretical analysis examines the performance of the proposed scheme, and optimal system parameters are provided. Software simulation shows that the proposed method reduces the average packet loss rate by about 20% to 40% compared with existing buffer management schemes. In addition, FPGA evaluation shows that the proposed method can be implemented in practical hardware and performs well in access networks.
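As a simple illustration of threshold-based shared-buffer control (not the paper's backpressure scheme itself), a dynamic-threshold admission rule in the style of Choudhury and Hahne might look like the sketch below, with per-priority weights as assumptions.

```python
# Illustrative dynamic-threshold admission control for a shared buffer.
# The per-priority weights are assumptions; the paper's backpressure-based
# threshold control minimizes a weighted average loss rate across priorities.
def admit(queue_len, total_occupancy, buffer_size, alpha):
    """Admit a packet only if its queue is below a dynamic threshold
    proportional to the remaining free buffer space."""
    threshold = alpha * (buffer_size - total_occupancy)
    return queue_len < threshold

BUFFER = 1000
alphas = {0: 4.0, 1: 2.0, 2: 1.0}    # higher priority gets a larger share
queues = {0: 120, 1: 300, 2: 250}    # current per-priority queue lengths
occupancy = sum(queues.values())
for prio, qlen in queues.items():
    print(prio, admit(qlen, occupancy, BUFFER, alphas[prio]))
```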
Nii L. SOWAH Qingbo WU Fanman MENG Liangzhi TANG Yinan LIU Linfeng XU
In this paper, we improve upon the accuracy of existing tracklet generation methods by repairing tracklets based on quality evaluation and detection propagation. Starting from object detections, we generate tracklets using three existing methods. We then perform co-tracklet quality evaluation to score each tracklet and identify good tracklets based on their scores. A detection propagation method is designed to transfer detections from the good tracklets to the bad ones so as to repair the latter. The tracklet quality evaluation in our method is implemented through intra-tracklet detection consistency and inter-tracklet detection completeness. Two propagation methods, global propagation and local propagation, are defined to achieve more accurate tracklet propagation. We demonstrate the effectiveness of the proposed method on the MOT15 dataset.
Megumi TAKEZAWA Hirofumi SANADA Takahiro OGAWA Miki HASEYAMA
In this paper, we propose a highly accurate method for estimating the quality of images compressed using fractal image compression. Based on an iterated function system, fractal image compression exploits the self-similarity of images and thereby achieves high compression performance; however, it cannot always be used as a standard compression technique because some compressed images are of low quality. In general, an image must be fully encoded and decoded before one can determine whether the compressed result is of low quality. Therefore, in a previous study, we proposed a method that estimates the quality of fractal-compressed images from image features of the given image, without actually encoding and decoding it, thereby providing an estimate rather quickly; however, the estimation accuracy was not entirely sufficient. In this paper, we extend our previously proposed method to improve estimation accuracy by adopting a new image feature, namely lacunarity. Simulation results show that the proposed method achieves higher accuracy than our previous method.
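A small sketch of the gliding-box lacunarity feature adopted by the improved method; the box size and binarization are illustrative choices, not the paper's settings.

```python
# Gliding-box lacunarity: Lambda = E[M^2] / E[M]^2, where M is the
# "mass" (number of set pixels) inside each box position.
import numpy as np

def lacunarity(binary_img, box=8):
    """Lacunarity of a binary image for one box size."""
    h, w = binary_img.shape
    masses = []
    for y in range(h - box + 1):
        for x in range(w - box + 1):
            masses.append(binary_img[y:y+box, x:x+box].sum())
    m = np.asarray(masses, dtype=float)
    return m.var() / m.mean() ** 2 + 1.0   # equals E[M^2] / E[M]^2

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) > 0.7).astype(int)   # toy binary texture
print(lacunarity(img))
```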
Guowei LI Qinghai YANG Kyung Sup KWAK
The widespread use of mobile electronic devices has triggered a surge in energy consumption, especially in user equipment (UE). In this paper, we investigate the energy efficiency (EE) of the UE experiencing the worst channel conditions, termed the worst-EE. Given the limited battery of mobile equipment, the worst-EE is a suitable metric for EE fairness optimization in the uplink of orthogonal frequency division multiple access (OFDMA) networks. More specifically, we determine the power and sub-carrier allocation that maximizes the worst-EE subject to the UEs' transmit power, sub-carriers, and statistical quality-of-service (QoS) requirements. To this end, we formulate a max-min power and sub-carrier allocation problem, a nonconvex fractional mixed-integer nonlinear program that is NP-hard to solve. To solve it, we first relax the sub-carrier allocation, formulate an upper-bound problem on the original one, and prove the quasi-concavity of the objective function. With the aid of the Powell-Hestenes-Rockafellar (PHR) approach, we then propose a fair-EE sub-carrier and power allocation algorithm. Finally, simulation results demonstrate the advantages of the proposed algorithm.
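An illustrative max-min EE formulation of the kind described (the notation below is assumed, not taken from the paper):

```latex
% p_{k,n}: transmit power of UE k on sub-carrier n;
% \rho_{k,n} \in \{0,1\}: sub-carrier assignment indicator;
% \gamma_{k,n}: channel gain-to-noise ratio; \zeta: amplifier inefficiency;
% P_c: circuit power; P_{\max}: per-UE power budget.
\max_{\mathbf{p},\, \boldsymbol{\rho}} \; \min_{k} \;
\frac{\sum_{n} \rho_{k,n}\, B \log_2\!\bigl( 1 + p_{k,n} \gamma_{k,n} \bigr)}
     {\zeta \sum_{n} \rho_{k,n}\, p_{k,n} + P_c}
\quad \text{s.t.} \quad
\sum_{k} \rho_{k,n} \le 1, \qquad
\sum_{n} \rho_{k,n}\, p_{k,n} \le P_{\max}
```

Relaxing ρ to [0,1] yields the upper-bound problem mentioned in the abstract, whose objective is a ratio of a concave rate term to an affine power term.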
Kimiko KAWASHIMA Kazuhisa YAMAGISHI Takanori HAYASHI
Many subjective quality assessment methods have been standardized, and experimenters can select among them in accordance with the aim of the planned subjective assessment experiment. It is often argued that the results of subjective quality assessment are affected by range effects, which are caused by the quality distribution of the assessment videos. However, no studies of the double-stimulus continuous quality-scale (DSCQS) and absolute category rating with hidden reference (ACR-HR) methods have investigated range effects in the high-quality range. We therefore conduct experiments using high-quality assessment videos (high-quality experiment) and low-to-high-quality assessment videos (low-to-high-quality experiment) and compare the DSCQS and ACR-HR methods in terms of accuracy, stability, and discrimination ability. First, regarding accuracy, the mean opinion scores of both methods were only marginally affected by range effects, and almost all common processed video sequences showed no significant difference between the high-quality and low-to-high-quality experiments. Second, the two methods were equally stable in the low-to-high-quality experiment, whereas the DSCQS method was more stable than the ACR-HR method in the high-quality experiment. Finally, the DSCQS method had higher discrimination ability than the ACR-HR method in the low-to-high-quality experiment, whereas the two methods had almost the same discrimination ability in the high-quality experiment. We thus conclude that the DSCQS method is better at minimizing range effects in the high-quality range than the ACR-HR method.
The centralized controller of SDN enables a global topology view of the underlying network, making it possible for the controller to achieve globally optimized resource composition and utilization, including optimized end-to-end paths. Currently, resource composition in the SDN arena is usually conducted in an imperative manner, where composition logic is explicitly specified in high-level programming languages; this requires strong programming and OpenFlow backgrounds. This paper proposes declarative path composition, named Compass, which offers a human-friendly user interface similar to natural language. Borrowing methodologies from the Semantic Web, Compass models and stores SDN resources using OWL and RDF, respectively, to foster virtualized and unified management of network resources regardless of the concrete controller platform. Moreover, path composition is conducted declaratively: the user merely specifies the composition goal in the SPARQL query language instead of spelling out concrete composition details in a programming language. Composed paths are also reused based on similarity matching, to reduce the frequency of time-consuming path composition. The experimental results demonstrate the applicability of Compass to path composition and reuse.
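A hedged sketch of the declarative style, using rdflib and a hypothetical RDF vocabulary for network links (the :Link/:from/:to/:bandwidthMbps terms are assumptions, not Compass's actual ontology): the user states a two-hop path goal with a bandwidth constraint in SPARQL rather than coding the composition logic.

```python
# Declarative path selection over an RDF resource model, in the spirit
# of Compass; the vocabulary and topology here are hypothetical.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix : <http://example.org/net#> .
:l1 a :Link ; :from :A ; :to :B ; :bandwidthMbps 100 .
:l2 a :Link ; :from :B ; :to :C ; :bandwidthMbps 40 .
""", format="turtle")

# Goal: find two-hop paths A -> ?mid -> C with enough bandwidth per hop.
q = """
PREFIX : <http://example.org/net#>
SELECT ?mid WHERE {
  ?x :from :A ; :to ?mid ; :bandwidthMbps ?b1 .
  ?y :from ?mid ; :to :C ; :bandwidthMbps ?b2 .
  FILTER (?b1 >= 30 && ?b2 >= 30)
}
"""
for row in g.query(q):
    print("via", row.mid)
```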
Shumpei YOSHIKAWA Koichi KOBAYASHI Yuh YAMASHITA
Event-triggered control is a method in which the control input is updated only when a certain triggering condition is satisfied. In networked control systems, quantization errors introduced by A/D conversion must also be considered. In this paper, a new method for quantized event-triggered control with switching triggering conditions is proposed. For a discrete-time linear system, we consider the problem of finding a state-feedback controller such that the closed-loop system is uniformly ultimately bounded within a certain ellipsoid. This problem is reduced to an LMI (linear matrix inequality) optimization problem, and the volume of the ellipsoid can be adjusted. The effectiveness of the proposed method is demonstrated by a numerical example.
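For intuition, a typical event-triggering rule with quantized state feedback and the ultimate-boundedness target can be written as follows (notation illustrative; the proposed method switches among triggering conditions rather than using a single fixed one):

```latex
% Quantized state feedback updated only at event instants k_i,
% with quantizer q(\cdot) and gain K:
u(k) = K \, q\bigl( x(k_i) \bigr), \qquad
k_{i+1} = \min \bigl\{ k > k_i : \| x(k) - x(k_i) \| \ge \sigma \| x(k) \| \bigr\}

% Uniform ultimate boundedness: trajectories eventually remain in the
% ellipsoid \mathcal{E} = \{ x : x^\top P x \le 1 \}, where P \succ 0 and
% K are obtained from an LMI optimization problem.
```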
Huyen T. T. TRAN Cuong T. PHAM Nam PHAM NGOC Anh T. PHAM Truong Cong THANG
360 videos have recently become a popular type of virtual reality content. However, a good quality metric for 360 videos is still an open issue. In this work, our goal is to identify appropriate objective quality metrics for 360 video communications. In particular, fourteen objective quality measures at different processing phases are considered, and a subjective test is conducted. The relationship between objective quality and subjective quality is investigated. We find that most of the PSNR-related quality measures correlate well with subjective quality; however, to evaluate video quality across different contents, a content-based quality metric is needed.
Toshiko TOMINAGA Kanako SATO Noriko YOSHIMURA Masataka MASUDA Hitoshi AOKI Takanori HAYASHI
Web browsing services are expanding as smartphones become increasingly popular worldwide. To provide customers with appropriate quality of web-browsing services, quality design and in-service quality management based on quality of experience (QoE) are important. We propose a web-browsing QoE estimation model. The most important QoE factor in web browsing is the waiting time for a web page to load; the variation in communication quality over a mobile network must also be considered. We conducted a subjective quality assessment test to clarify QoE characteristics in terms of waiting time using 20 different types of web pages and constructed a web-page QoE estimation model. We then conducted a subjective quality assessment test of web browsing to clarify the relationship between web-page QoE and web-browsing QoE for three web sites. We obtained two QoE characteristics: first, the main factor influencing web-browsing QoE is the average web-page QoE; second, when web-page QoE varies, a large drop in web-page QoE causes web-browsing QoE to decrease. We used these characteristics to construct our web-browsing QoE estimation model. Verification test results using non-training data confirm the accuracy of the model. We also show that our findings are applicable to quality design and in-service quality management of web browsing based on QoE.