Hao ZHANG Mengtian RONG Tao LIU
In this letter, we propose a new intra-field deinterlacing algorithm based on an edge-dependent weighted filter (EDWF). The proposed algorithm consists of three steps: 1) calculating the gradients of three directions (45°, 90°, and 135°) in the local working window; 2) deriving the weights of the neighboring pixels from the edge information contained in the pixel gradients; 3) interpolating the missing pixel using the proposed EDWF interpolator. Compared with existing deinterlacing methods on various images and video sequences, the proposed algorithm improves the peak signal-to-noise ratio (PSNR) while achieving better subjective quality.
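As a rough illustration of the three steps, the sketch below interpolates one missing pixel from the lines above and below it. The abstract does not give the concrete weighting function or window size, so the inverse-gradient weighting and the three-pixel window are assumptions.

```python
import numpy as np

def edwf_interpolate_pixel(up, down, x, eps=1e-6):
    """Interpolate one missing pixel between two existing field lines.

    up, down : 1-D arrays holding the lines above and below the missing line.
    x        : column index of the missing pixel (1 <= x <= len - 2).
    The weighting (inverse of the directional gradient) is an assumption; the
    abstract only states that the weights are derived from edge information.
    """
    # Step 1: directional gradients at 45, 90 and 135 degrees.
    g45  = abs(float(up[x + 1]) - float(down[x - 1]))
    g90  = abs(float(up[x])     - float(down[x]))
    g135 = abs(float(up[x - 1]) - float(down[x + 1]))

    # Step 2: weights -- a smaller gradient (stronger edge alignment) gets a larger weight.
    w = np.array([1.0 / (g45 + eps), 1.0 / (g90 + eps), 1.0 / (g135 + eps)])
    w /= w.sum()

    # Step 3: weighted average of the three directional pair means.
    p45  = (float(up[x + 1]) + float(down[x - 1])) / 2.0
    p90  = (float(up[x])     + float(down[x]))     / 2.0
    p135 = (float(up[x - 1]) + float(down[x + 1])) / 2.0
    return w @ np.array([p45, p90, p135])

# Example: a diagonal edge should be interpolated along the 45-degree direction.
up   = np.array([ 10,  10, 200], dtype=float)
down = np.array([200, 200, 200], dtype=float)
print(edwf_interpolate_pixel(up, down, 1))   # close to 200
```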
Sumaru NIIDA Satoshi UEMURA Shigehiro ANO
With the rapid growth of high-performance information and communication technology (ICT) devices such as smartphones and tablet PCs, multitasking has become a popular way of using mobile devices. Users adopt multitasking because it reduces dissatisfaction with waiting time and makes effective use of time by shifting attention from the waiting process to other content. This is a good solution to the problem of waiting; however, it may cause another problem: an increase in traffic volume due to multiple applications being worked on simultaneously. Thus, an effective method of controlling throughput that is adapted to the multitasking situation is required. This paper proposes a transmission rate control method for web browsing that takes multitasking behavior into account and quantitatively demonstrates its effect on service through two different field experiments. The main contribution of this paper is to present a service design process for a new transmission rate control that takes human-network interaction into account based on a human-centered approach. A field trial using a testbed showed that satisfaction with waiting time did not degrade even when the throughput of the background task was reduced by 40%.
This paper analyzes the correlation between various acoustic features and perceptual voice quality similarity and proposes a perceptually similar speaker selection technique based on distance metric learning. To analyze the relationship between acoustic features and voice quality similarity, we first conduct a large-scale subjective experiment using the voices of 62 female speakers, acquiring perceptual voice quality similarity scores for all pairs of speakers. Next, multiple linear regression analysis is carried out; it shows that four acoustic features are highly correlated with voice quality similarity. The proposed speaker selection technique first trains a transform matrix by distance metric learning using the perceptual voice quality similarity scores acquired in the subjective experiment. Given input speech, its acoustic features are transformed using the trained transform matrix, after which speaker selection is performed based on the Euclidean distance in the transformed acoustic feature space. We perform speaker selection experiments and evaluate the performance of the proposed technique by comparing it with speaker selection without feature space transformation. The results indicate that transformation based on distance metric learning reduces the error rate by 53.9%.
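The abstract does not name the four acoustic features or the specific metric learning algorithm, so the following sketch simply assumes a learned linear transform L is already available and shows the selection step: map the features into the learned space and pick the nearest speaker in Euclidean distance.

```python
import numpy as np

def select_similar_speaker(input_features, speaker_features, L):
    """Pick the perceptually most similar speaker under a learned metric.

    input_features   : (d,) acoustic feature vector of the input speech.
    speaker_features : (n_speakers, d) feature matrix of the candidate speakers.
    L                : (d, d) transform matrix learned by distance metric learning
                       (hypothetical here; the abstract does not specify it).
    """
    x = L @ input_features                 # map the input into the learned space
    Y = speaker_features @ L.T             # map all candidates the same way
    dists = np.linalg.norm(Y - x, axis=1)  # Euclidean distance in the transformed space
    return int(np.argmin(dists)), dists

# Toy example with 3 candidate speakers and 4 assumed acoustic features.
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))
speakers = rng.standard_normal((3, 4))
idx, d = select_similar_speaker(speakers[1] + 0.01 * rng.standard_normal(4), speakers, L)
print(idx, d)   # expect index 1 to be the closest speaker
```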
Gordana GARDASEVIC Soko DIVANOVIC Milutin RADONJIC Igor RADUSINOVIC
Support for incoming traffic differentiation and Quality of Service (QoS) assurance is very important for the development of high-performance packet switches capable of separating traffic flows. In a previous paper, we proposed implementing two buffers at each crosspoint of a crossbar fabric, which leads to the Dual Crosspoint Queued (DCQ) switch. Inside the DCQ switch, one buffer is used to store real-time traffic and the other non-real-time traffic. We also showed that static priority algorithms can provide QoS only for real-time traffic because of their greedy nature, which gives absolute priority to that type of traffic. To overcome this problem, in this paper we propose a DCQ switch with the Largest Weighted Occupancy First scheduling algorithm, which provides the desired QoS support for both traffic flows. Detailed analysis of the simulation results confirms the validity of the proposed solution.
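A minimal sketch of how a Largest Weighted Occupancy First scheduler could choose among the two buffers at each crosspoint of one output is given below; the concrete weight values are assumptions, since the abstract does not state them.

```python
def lwof_select(occupancy_rt, occupancy_nrt, w_rt=2.0, w_nrt=1.0):
    """Select the crosspoint buffer to serve next at one output.

    occupancy_rt / occupancy_nrt : queue lengths of the real-time and
    non-real-time buffers at each crosspoint of this output column.
    The weights w_rt > w_nrt favour real-time traffic without starving the
    non-real-time buffers (the concrete values are assumptions).
    Returns (crosspoint index, 'rt' or 'nrt') of the largest weighted occupancy.
    """
    best = None
    for i, (q_rt, q_nrt) in enumerate(zip(occupancy_rt, occupancy_nrt)):
        for cls, score in (("rt", w_rt * q_rt), ("nrt", w_nrt * q_nrt)):
            if best is None or score > best[0]:
                best = (score, i, cls)
    return best[1], best[2]

# Example: a long non-real-time queue can still win over a short real-time one.
print(lwof_select([1, 0, 2], [5, 0, 1]))   # -> (0, 'nrt') since 1*5 > 2*2
```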
Marcus BARKOWSKY Enrico MASALA Glenn VAN WALLENDAEL Kjell BRUNNSTRÖM Nicolas STAELENS Patrick LE CALLET
The current development of video quality assessment algorithms suffers from the lack of available video sequences for training, verification, and validation to determine and enhance an algorithm's application scope. The Joint Effort Group of the Video Quality Experts Group (VQEG-JEG) is currently driving efforts towards the creation of large-scale, reproducible, and easy-to-use databases. These databases will contain bitstreams from recent video encoders (H.264, H.265), packet loss impairment patterns and impaired bitstreams, bitstream information pre-parsed into XML files, and the outputs of well-known objective video quality measurements. The database is continuously updated and enlarged using reproducible processing chains. Currently, more than 70,000 sequences are available for statistical analysis of video quality measurement algorithms. New research questions are posed, as the database is designed to verify and validate models on a very large scale, testing and validating various scopes of application, while subjective assessment has to be limited to a comparably small subset of the database. Special focus is given to the principles guiding the database development, and some results are given to illustrate the practical usefulness of such a database with respect to these new research questions.
Sumiko MIYATA Katsunori YAMAOKA Hirotsugu KINOSHITA
We previously proposed novel call admission control (CAC) methods for maximizing total user satisfaction in a heterogeneous traffic network and showed their effectiveness by using the optimal threshold obtained from numerical analysis [1],[2]. These CAC methods assume that only selfish users exist in the network. However, we need to consider the possibility that some cooperative users exist who would agree to reduce their requested bandwidth to improve another user's Quality of Service (QoS). Under this assumption, conventional CAC may not be optimal. If there are cooperative users in the network, we need control methods that encourage such user cooperation. However, such cooperation-encouraging control methods have not yet been proposed. Therefore, in this paper, we propose novel CAC methods for cooperative users by using queueing theory. Numerical analyses show their effectiveness. We also analyze the characteristics of the optimal threshold control parameter.
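A minimal sketch of a threshold-based admission decision with cooperative users is given below. The queueing model and threshold optimization of the paper are not reproduced; the reduced-bandwidth rule and the parameter values are assumptions used only for illustration.

```python
def admit_call(used_bw, capacity, requested_bw, threshold, cooperative, reduced_bw):
    """Decide whether to admit a new call and at which bandwidth.

    used_bw      : bandwidth currently in use.
    capacity     : total link capacity.
    requested_bw : bandwidth requested by the arriving call.
    threshold    : occupancy level above which only reduced requests are admitted.
    cooperative  : True if the user agrees to reduce its requested bandwidth.
    reduced_bw   : the smaller bandwidth a cooperative user would accept.
    Returns the granted bandwidth, or 0 if the call is blocked.
    """
    if used_bw + requested_bw <= capacity and used_bw < threshold:
        return requested_bw          # below the threshold: grant the full request
    if cooperative and used_bw + reduced_bw <= capacity:
        return reduced_bw            # above the threshold: cooperative users squeeze in
    return 0                         # otherwise the call is blocked

print(admit_call(used_bw=80, capacity=100, requested_bw=30,
                 threshold=70, cooperative=True, reduced_bw=10))   # -> 10
```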
Qian HU Muqing WU Hailong HAN Ning WANG Chaoyi ZHANG
As a promising future network architecture, information-centric networking (ICN) has attracted much attention; its ubiquitous in-network caching is one of the key technologies for optimizing the dissemination of information. However, considering the diversity of contents and the limitation of cache resources in the Internet, it is usually difficult to find a one-size-fits-all caching strategy. How to manage the ubiquitous in-network cache in ICN has therefore become an important problem. In this paper, we explore ways to improve cache performance from the three perspectives of spatiality, temporality, and availability, based on which we further propose an in-network cache management strategy that supports differentiated service. We divide the contents requested in the network into different levels, and the caching strategy is selected according to the content level. Furthermore, the corresponding models of utilizing cache resources in spatiality, temporality, and availability are derived for comparison and analysis. Simulations verify that the differentiated-service-based cache management strategy can optimize the utilization of cache resources and achieve higher overall cache performance.
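The abstract does not name the per-level caching strategies, so the mapping below (cache everywhere, cache probabilistically, or do not cache) is only an assumed illustration of level-dependent strategy selection.

```python
import random

def caching_decision(content_level, prob_cache=0.3):
    """Decide whether a node caches a passing content object, by content level.

    content_level : 0 = highest service level, 1 = medium, 2 = lowest
                    (the three-level split and the strategies are assumptions).
    """
    if content_level == 0:
        return True                          # high level: cache at every node on the path
    if content_level == 1:
        return random.random() < prob_cache  # medium level: probabilistic caching
    return False                             # low level: rely on the origin server

print([caching_decision(lvl) for lvl in (0, 1, 2)])
```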
Yun SHEN Yitong LIU Jing LIU Hongwen YANG Dacheng YANG
In this paper, we design an Unequal Error Protection (UEP) rateless code with a special coding graph and apply it to propose a novel HTTP adaptive streaming scheme based on UEP rateless codes (HASUR). The designed UEP rateless code provides high diversity in decoding probability and priority for data of different importance levels, with an overhead smaller than 0.27. By adopting this UEP rateless channel coding together with scalable video source coding, HASUR ensures that symbols carrying the basic quality layer are decoded first, guaranteeing a fluent playback experience. It also provides multiple layers to deliver the most suitable quality under fluctuating bandwidth and packet loss rate (PLR) without estimating them in advance. We evaluate HASUR against alternative solutions. Simulation results show that HASUR provides higher video quality and adapts better to bandwidth and PLR than two commercial schemes under end-to-end transmission.
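The sketch below is not the code designed in the paper; it only illustrates the general idea of UEP through the coding graph, where base-layer source symbols are picked as neighbours of encoded symbols more often than enhancement-layer symbols. The degree distribution and the bias factor are assumptions.

```python
import random

def uep_lt_encode(source, n_encoded, base_fraction=0.25, base_bias=3.0, seed=7):
    """Generate rateless encoded symbols that protect the base layer more strongly.

    source        : list of integer source symbols, base layer first.
    base_fraction : fraction of symbols belonging to the high-importance base layer.
    base_bias     : how much more often a base-layer symbol is picked as a neighbour.
    """
    rng = random.Random(seed)
    k = len(source)
    n_base = max(1, int(base_fraction * k))
    weights = [base_bias] * n_base + [1.0] * (k - n_base)
    encoded = []
    for _ in range(n_encoded):
        degree = rng.choice([1, 2, 2, 3, 3, 3, 4])   # toy degree distribution
        neighbours = set()
        while len(neighbours) < degree:
            neighbours.add(rng.choices(range(k), weights=weights, k=1)[0])
        value = 0
        for idx in neighbours:
            value ^= source[idx]                     # XOR of the chosen source symbols
        encoded.append((sorted(neighbours), value))
    return encoded

symbols = list(range(1, 21))       # 20 toy source symbols; the first 5 form the base layer
for neighbours, value in uep_lt_encode(symbols, n_encoded=3):
    print(neighbours, value)
```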
Integrating a visual attention (VA) model into an objective image quality metric is a rapidly evolving area in modern image quality assessment (IQA) research because of the significant opportunities the VA information presents. So far, the literature has suggested using either a task-free saliency map or a quality-task one for integration into the quality metric. A hybrid integration approach that takes advantage of both saliency maps is presented in this paper. We compare our hybrid integration scheme with existing integration schemes using simple quality metrics. Results show that the proposed method performs better than previous techniques in terms of prediction accuracy.
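A minimal sketch of saliency-weighted pooling with a hybrid map follows; the linear mixing weight alpha is an assumption, since the paper's exact combination scheme is not described in this abstract.

```python
import numpy as np

def hybrid_saliency_pool(local_quality, sal_task_free, sal_quality_task, alpha=0.5):
    """Pool a local quality map into a single score using a hybrid saliency map.

    local_quality    : (H, W) per-pixel quality map from a simple metric.
    sal_task_free    : (H, W) task-free saliency map.
    sal_quality_task : (H, W) saliency map recorded under a quality-judgement task.
    alpha            : assumed linear mixing weight between the two maps.
    """
    hybrid = alpha * sal_task_free + (1.0 - alpha) * sal_quality_task
    hybrid = hybrid / (hybrid.sum() + 1e-12)        # normalise into a weighting distribution
    return float((hybrid * local_quality).sum())    # saliency-weighted average quality

# Toy example: the salient region dominates the pooled score.
q  = np.array([[1.0, 0.2], [0.2, 0.2]])
s1 = np.array([[1.0, 0.0], [0.0, 0.0]])
s2 = np.array([[0.8, 0.2], [0.0, 0.0]])
print(hybrid_saliency_pool(q, s1, s2))   # 0.92
```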
Mianxiong DONG Takashi KIMATA Komei SUGIURA Koji ZETTSU
Mobile social networks (MSNs) provide diverse services to meet the needs of mobile users, e.g., discovering new friends and sharing pictures, videos, and other information among friends with common interests. On the other hand, Quality of Experience (QoE) is a concept related to, but different from, Quality of Service (QoS) perception. QoE is a subjective measure of a customer's experience with a service; it focuses on the entire service experience and is a more holistic evaluation. So far, the MSN literature has mainly focused on and addressed QoS issues. To the best of our knowledge, this is the first article to address QoE issues in emerging MSNs. In this paper, we first present a comprehensive investigation of recent advances in MSNs as well as the QoE issues addressed in various types of applications and networks. Based on the lessons learned from the literature, we then propose future research directions for QoE in MSNs.
Yen-Wen CHEN Meng-Hsien LIN Yung-Ta SU
To lengthen the operational time of mobile devices, power must be managed effectively. To achieve this objective, a Discontinuous Reception (DRX) mechanism is proposed for use in the long-term evolution (LTE) network to enable user equipment (UE) to consume power efficiently. The DRX mechanism provides parameters for base stations, such as the evolved Node B (eNB), to configure and manage the transition of UEs between the idle (sleep) and active states. Although these parameters can be adjusted dynamically in cooperation with the traffic scheduler, a high signaling overhead and processing load might be introduced in practical deployment if the parameters are adjusted too frequently. In this study, to examine power-saving efficiency, distinct traffic types constrained by various quality of service (QoS) factors were scheduled without dynamically changing the DRX parameters. The concept of burst-based scheduling, which considers the state transitions and channel conditions of each UE, is proposed to increase power-saving efficiency while satisfying the desired QoS. Both Hypertext Transfer Protocol (HTTP) and video-streaming traffic models were exhaustively simulated to examine the performance of the proposed scheme, and numerous scheduling alternatives were tested for comparison. The simulation results indicate that video-streaming traffic is more sensitive to the scheduling scheme than HTTP traffic. The results were further analyzed in terms of traffic scheduling and parameter adjustment, and the analysis can help guide future studies on power management in the LTE network.
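One assumed reading of the burst-based idea (serve awake UEs a whole buffer at a time, in descending channel quality, so they can return to DRX sleep quickly) is sketched below; the actual allocation rule of the paper is not given in the abstract.

```python
def burst_schedule(ues, rb_budget):
    """Allocate resource blocks in whole bursts so each served UE can drain its
    buffer and return to DRX sleep quickly (an assumed illustration only).

    ues : list of dicts with keys 'id', 'awake' (DRX active state), 'cqi'
          (channel quality) and 'buffer_rb' (RBs needed to empty the buffer).
    """
    allocation = {}
    awake = sorted((u for u in ues if u["awake"]), key=lambda u: u["cqi"], reverse=True)
    for u in awake:
        if u["buffer_rb"] <= rb_budget:       # only grant complete bursts
            allocation[u["id"]] = u["buffer_rb"]
            rb_budget -= u["buffer_rb"]
    return allocation

ues = [{"id": "UE1", "awake": True,  "cqi": 12, "buffer_rb": 20},
       {"id": "UE2", "awake": True,  "cqi": 15, "buffer_rb": 30},
       {"id": "UE3", "awake": False, "cqi": 9,  "buffer_rb": 10}]
print(burst_schedule(ues, rb_budget=40))      # -> {'UE2': 30} with these numbers
```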
Hang ZHANG Yong DING Peng Wei WU Xue Tong BAI Kai HUANG
Visual quality evaluation is crucially important for various video and image processing systems. Traditionally, subjective image quality assessment (IQA), given by the judgments of people, can be perfectly consistent with the human visual system (HVS). However, subjective IQA is cumbersome and easily affected by the experimental environment, which further limits its application to evaluating massive numbers of images. Therefore, objective IQA metrics that can be incorporated into machines and automatically evaluate image quality are desired. Effective objective IQA methods should predict quality accurately, in accordance with subjective evaluation. Motivated by the observation that the HVS is highly adapted to extracting the irregularity information of textures in a scene, we introduce the multifractal formalism into an image quality assessment scheme in this paper. Based on multifractal analysis, statistical complexity features of natural images are extracted robustly. A novel framework for image quality assessment is then proposed that quantifies the discrepancies between the multifractal spectra of images. A total of 982 images covering five types of distortion (JPEG2000 compression, JPEG compression, white noise, Gaussian blur, and fast fading) are used to validate the proposed algorithm. Experimental results demonstrate that the proposed metric is highly effective for evaluating perceived image quality and that it outperforms many state-of-the-art methods.
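Assuming the multifractal spectra f(alpha) of the reference and distorted images have already been estimated (spectrum estimation itself is outside this sketch), a minimal illustration of the spectrum-discrepancy step might be the following; the mean absolute curve difference is an assumed stand-in for the paper's comparison measure.

```python
import numpy as np

def spectrum_discrepancy(alpha_ref, f_ref, alpha_dist, f_dist, n=200):
    """Quantify the discrepancy between two sampled multifractal spectra f(alpha).

    Each spectrum is given as increasing alpha samples and the corresponding
    f(alpha) values; the curves are resampled on a common alpha grid and
    compared by their mean absolute difference.
    """
    lo = max(alpha_ref.min(), alpha_dist.min())
    hi = min(alpha_ref.max(), alpha_dist.max())
    grid = np.linspace(lo, hi, n)
    f1 = np.interp(grid, alpha_ref, f_ref)
    f2 = np.interp(grid, alpha_dist, f_dist)
    return float(np.mean(np.abs(f1 - f2)))

# Toy example: two parabola-shaped spectra with slightly different widths.
a = np.linspace(0.8, 1.4, 50)
print(spectrum_discrepancy(a, 1 - (a - 1.1)**2 / 0.09, a, 1 - (a - 1.1)**2 / 0.16))
```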
With the phenomenal explosion of online services, social networks are becoming a ubiquitous platform for numerous services in which service consumers must select trustworthy service providers, with the help of other intermediate participants, before invoking a service. Under this circumstance, the trustworthiness of a service provider has to be evaluated along the social trust paths from the service consumer to the service provider, and to this end, selecting the optimal social trust path (OSTP), the one that yields the most trustworthy evaluation result, is a prerequisite. OSTP selection with multiple quality of trust (QoT) constraints has been proven to be NP-complete. Heuristic algorithms with polynomial and pseudo-polynomial time complexities are often used to deal with this problem. However, existing solutions cannot guarantee search efficiency; that is, they have difficulty avoiding suboptimal solutions during the search process. Quantum annealing uses delocalization and tunneling to avoid local minima without sacrificing execution time, and several recent studies have shown that it is a promising way to tackle many optimization problems. In this paper, we propose a novel quantum annealing based OSTP selection algorithm (QA_OSTP) for large-scale complex social networks. Experiments show that QA_OSTP outperforms its heuristic counterparts.
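A classical simulated-annealing analogue of the idea is sketched below; it is not the authors' quantum annealing formulation, and the trust aggregation rule, the neighbourhood move, and the temperature schedule are all assumptions made for illustration.

```python
import math, random

def path_utility(path_trust, path_delay, delay_bound, penalty=1.0):
    """Aggregated trust of a path (product of per-hop trust), penalised when it
    violates a QoT constraint. The product rule and the single delay constraint
    are illustrative assumptions."""
    trust = math.prod(path_trust)
    return trust - (penalty if path_delay > delay_bound else 0.0)

def anneal_ostp(candidates, delay_bound, steps=2000, t0=1.0, cooling=0.995, seed=1):
    """Pick the candidate social trust path with the best utility by simulated
    annealing (a classical stand-in for quantum annealing)."""
    rng = random.Random(seed)
    current = rng.choice(candidates)
    best = current
    t = t0
    for _ in range(steps):
        proposal = rng.choice(candidates)      # neighbourhood move: any other candidate
        delta = (path_utility(*proposal, delay_bound)
                 - path_utility(*current, delay_bound))
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = proposal
        if path_utility(*current, delay_bound) > path_utility(*best, delay_bound):
            best = current
        t *= cooling                           # gradually reduce the "thermal" fluctuation
    return best

# Each candidate is (per-hop trust values, end-to-end delay).
candidates = [([0.9, 0.8], 3.0), ([0.95, 0.9, 0.9], 6.0), ([0.7, 0.99], 2.0)]
print(anneal_ostp(candidates, delay_bound=5.0))   # -> ([0.9, 0.8], 3.0)
```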
Masaki KUBO Kensuke NAKANISHI Kentaro YANAGIHARA Shinsuke HARA
The use of cooperative nodes is effective for enhancing the reliability of wireless data transmission between a source and a destination by means of the transmit diversity effect. However, in its application to wireless multi-hop networks, how to form a set of cooperative node candidates and how to select multiple cooperative nodes from it have not been well investigated. In this paper, we propose a multiple cooperative node selection method based on a criterion composed of “quality” and “angle” metrics, which can select and order adequate cooperative nodes. Computer simulation results show that the proposed method can effectively reduce the packet error rate without any knowledge of node locations.
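The abstract does not define how the “quality” and “angle” metrics are computed (and the method requires no location information), so the sketch below only illustrates the select-and-order structure: candidates are ranked by a combined score, and the weights of the combination are assumptions.

```python
def select_cooperative_nodes(candidates, n_select=2, w_quality=0.7, w_angle=0.3):
    """Order cooperative-node candidates by a combined criterion and pick the top ones.

    candidates : list of (node_id, quality_metric, angle_metric) tuples, with both
                 metrics already normalised to [0, 1]. The linear combination and
                 its weights are assumptions used only to show the ordering step.
    """
    scored = sorted(candidates,
                    key=lambda c: w_quality * c[1] + w_angle * c[2],
                    reverse=True)
    return [node_id for node_id, _, _ in scored[:n_select]]

candidates = [("A", 0.9, 0.2), ("B", 0.6, 0.5), ("C", 0.8, 0.8)]
print(select_cooperative_nodes(candidates))   # -> ['C', 'A'] with the assumed weights
```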
Yeunwoong KYUNG Taihyong YIM Taekook KIM Tri M. NGUYEN Jinwoo PARK
This paper proposes a QoS-aware differential processing control (QADPC) scheme for OpenFlow-based mobile networks. QADPC classifies the packets arriving at the control plane according to end-terminal mobility and service type. Different capacities are then assigned to each packet class for prioritized processing. Using Markov chains, QADPC is evaluated in terms of blocking probability and waiting time in the control plane. Analytical results demonstrate that QADPC offers high-priority packets both a lower blocking probability and a shorter waiting time.
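For illustration only, one could model each packet class as its own finite queue with its assigned capacity; the sketch below uses the standard M/M/1/K results, which is an assumption and not the paper's exact Markov-chain model.

```python
def mm1k_metrics(lam, mu, K):
    """Blocking probability and mean sojourn time of an M/M/1/K queue.

    lam : arrival rate of the class, mu : assigned service capacity,
    K   : system size (buffer plus server).
    """
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (K + 1)
        L = K / 2.0
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
        L = rho / (1 - rho) - (K + 1) * rho**(K + 1) / (1 - rho**(K + 1))
    W = L / (lam * (1 - p_block))     # Little's law with the carried (non-blocked) load
    return p_block, W

# The high-priority class gets a larger share of the control-plane capacity.
print("high priority:", mm1k_metrics(lam=40.0, mu=80.0, K=20))
print("low  priority:", mm1k_metrics(lam=40.0, mu=50.0, K=20))
```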
Four calculation techniques for the Q-factor determination of resonant structures are compared on the basis of the influence of the VNA measurement uncertainty. The influence is evaluated using Monte Carlo calculations. On the basis of the deviation, the dispersion, and the effect of nearby resonances, the circle fitting method is the most appropriate technique. Although the 3dB method is the most popular technique, the Q-factors calculated by this method exhibit deviations, and the sign and amount of the deviation depend on the measurement setup. Comparisons using measurement data demonstrate that the uncertainty of the dielectric loss tangent calculated by the circle fitting method is less than a third of those calculated by the other three techniques.
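For reference, the 3dB method computes Q as the resonant frequency divided by the half-power bandwidth, Q = f0 / (f_upper - f_lower). A minimal sketch on sampled transmission-magnitude data follows; the linear interpolation of the half-power crossings is an assumed implementation detail.

```python
import numpy as np

def q_factor_3db(freq, s21_mag):
    """Estimate the loaded Q-factor of a transmission-type resonance by the 3dB method.

    freq    : monotonically increasing frequency samples (Hz).
    s21_mag : linear |S21| magnitude at each frequency.
    """
    peak_idx = int(np.argmax(s21_mag))
    f0 = freq[peak_idx]
    half = s21_mag[peak_idx] / np.sqrt(2.0)                 # half-power level
    lo = int(np.where(s21_mag[:peak_idx] < half)[0][-1])    # last sample below half power, left side
    hi = peak_idx + int(np.where(s21_mag[peak_idx:] < half)[0][0])  # first below, right side
    f_lower = np.interp(half, s21_mag[[lo, lo + 1]], freq[[lo, lo + 1]])
    f_upper = np.interp(half, s21_mag[[hi, hi - 1]], freq[[hi, hi - 1]])
    return f0 / (f_upper - f_lower)

# Synthetic Lorentzian resonance at 1 GHz with Q = 1000.
f = np.linspace(0.995e9, 1.005e9, 4001)
s21 = 1.0 / np.sqrt(1.0 + (2.0 * 1000 * (f - 1e9) / 1e9) ** 2)
print(q_factor_3db(f, s21))   # approximately 1000
```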
A new scheme based on multi-order visual comparison is proposed for full-reference image quality assessment. Inspired by the observation that various image derivatives have large but different effects on visual perception, we perform separate comparisons on different orders of image derivatives. To obtain an overall image quality score, we adaptively integrate the results of the different comparisons via a perception-inspired strategy. Experimental results on public databases demonstrate that the proposed method is more competitive than several state-of-the-art methods when benchmarked against subjective assessments given by human observers.
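A minimal sketch of comparing images on several derivative orders is given below; the finite-difference derivatives, the SSIM-like similarity map, and the fixed merging weights are assumptions standing in for the paper's perception-inspired adaptive integration.

```python
import numpy as np

def derivative_similarity(ref, dist, c=1e-3):
    """Compare reference and distorted images on different derivative orders."""
    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    def laplacian(img):
        gy, gx = np.gradient(img)
        gyy, _ = np.gradient(gy)
        _, gxx = np.gradient(gx)
        return gxx + gyy

    scores = []
    for fa, fb in [(ref, dist),                              # order 0: intensity
                   (grad_mag(ref), grad_mag(dist)),          # order 1: gradient magnitude
                   (laplacian(ref), laplacian(dist))]:       # order 2: Laplacian
        sim = (2.0 * fa * fb + c) / (fa**2 + fb**2 + c)      # SSIM-like similarity map
        scores.append(sim.mean())
    weights = np.array([0.2, 0.5, 0.3])                      # assumed, not the paper's weights
    return float(weights @ np.array(scores))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
print(derivative_similarity(ref, ref))                       # identical images -> 1.0
print(derivative_similarity(ref, ref + 0.1 * rng.random((32, 32))))
```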
Since their introduction, estimation of distribution algorithms (EDAs) have been successfully used to solve discrete optimization problems and have thus proven to be an effective methodology for discrete optimization. To enhance the applicability of EDAs, researchers have integrated EDAs with discretization methods so that EDAs designed for discrete variables can also solve continuous optimization problems. To further our understanding of the collaboration between EDAs and discretization methods, in this paper we propose a quality measure of discretization methods for EDAs. We then utilize the proposed quality measure to analyze three discretization methods: fixed-width histogram (FWH), fixed-height histogram (FHH), and greedy random split (GRS). Analytical measurements are obtained for FHH and FWH, and sampling measurements are conducted for FHH, FWH, and GRS. Furthermore, we integrate the Bayesian optimization algorithm (BOA), a representative EDA, with the three discretization methods to conduct experiments and to observe the performance differences. A good agreement is reached between the discretization quality measurements and the numerical optimization results. The empirical results show that the proposed quality measure can be considered an indicator of how suitable a discretization method is for working with EDAs.
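To illustrate two of the discretization methods (GRS is omitted here, and the binning code is a generic sketch rather than the authors' implementation), FWH splits the variable range into equal-width bins while FHH chooses quantile edges so every bin holds roughly the same number of samples:

```python
import numpy as np

def fixed_width_histogram(values, low, high, n_bins):
    """FWH: split [low, high] into equal-width bins and return each value's bin index."""
    edges = np.linspace(low, high, n_bins + 1)
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1), edges

def fixed_height_histogram(values, n_bins):
    """FHH: use quantile edges so every bin holds roughly the same number of samples."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1), edges

# A skewed sample shows the difference: FWH leaves some bins nearly empty,
# while FHH equalises the bin populations.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1000)
fwh_idx, _ = fixed_width_histogram(x, x.min(), x.max(), 5)
fhh_idx, _ = fixed_height_histogram(x, 5)
print(np.bincount(fwh_idx, minlength=5))   # very uneven counts
print(np.bincount(fhh_idx, minlength=5))   # roughly 200 per bin
```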
Leida LI Hancheng ZHU Jiansheng QIAN Jeng-Shyang PAN
This letter presents a no-reference blocking artifact measure based on the analysis of color discontinuities in the YUV color space. Color shift and color disappearance are first analyzed in JPEG images. For color-shifting and color-disappearing areas, the blocking artifact scores are obtained by computing the gradient differences across the block boundaries in the U component and the Y component, respectively. An overall quality score is then produced as the average of the local scores. Extensive simulations and comparisons demonstrate the effectiveness of the proposed method.
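A minimal sketch of the boundary gradient-difference idea on a single channel is shown below; how the paper distinguishes color-shifting from color-disappearing areas and forms the final local scores is not reproduced, and the boundary-versus-interior contrast is an assumed simplification.

```python
import numpy as np

def blockiness_score(channel, block=8):
    """Average gradient difference across vertical block boundaries of one channel.

    channel : 2-D array (e.g. the Y or U plane of a JPEG-coded image).
    The score contrasts the horizontal gradients that straddle an 8x8 block
    boundary with the gradients just inside the blocks.
    """
    diff = np.abs(np.diff(channel.astype(float), axis=1))   # horizontal gradients
    cols = np.arange(diff.shape[1])
    on_boundary = (cols % block) == (block - 1)             # gradients across a boundary
    return float(diff[:, on_boundary].mean() - diff[:, ~on_boundary].mean())

# A synthetic image made of constant 8x8 blocks has all its gradient energy on the boundaries.
rng = np.random.default_rng(0)
img = np.kron(rng.integers(0, 256, size=(8, 8)), np.ones((8, 8)))
print(blockiness_score(img))                          # clearly positive -> strong blocking
print(blockiness_score(rng.random((64, 64)) * 255))   # near zero for random content
```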
Donghui LIN Toru ISHIDA Yohei MURAKAMI Masahiro TANAKA
The availability of more and more Web services provides users with great variety in designing service processes. However, there are situations in which services or service processes cannot meet users' requirements in functional QoS dimensions (e.g., translation quality in a machine translation service). In such cases, composing Web services and human tasks is expected to be a possible alternative solution. However, analyses of such practical efforts have rarely been reported in previous research, most of which focuses on the technology of embedding human tasks in software environments. Therefore, this study aims at analyzing the effects of composing Web services and human activities through a case study in the domain of language services with large-scale experiments. From the experiments and analysis, we find that (1) service implementation variety can be greatly increased by composing Web services and human activities to satisfy users' QoS requirements; (2) the functional QoS of a Web service can be significantly improved by introducing human activities with limited cost and execution time, provided that the human activities are of a certain quality; and (3) multiple QoS attributes of a composite service are affected in different ways by human activities of different quality.