Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was pixel count; even moderately priced (∼ $120) DSCs offered 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.
Our research focuses on a video quality assessment model based on the MPEG-7 descriptor. Video quality is estimated using several features derived from the predicted frame quality, such as the average value, worst value, best value, and standard deviation, together with the predicted frame rate obtained from the descriptor information. As a result, video quality can be assessed with high prediction accuracy: correlation coefficient = 0.94, standard deviation of error = 0.24, maximum error = 0.68, and outlier ratio = 0.23.
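As a hedged illustration of the feature aggregation just described, the following Python sketch (not the authors' implementation; the per-frame scores, frame rate, and regression weights are hypothetical) summarizes predicted per-frame qualities into the clip-level features and combines them with an assumed linear model.

```python
# Illustrative sketch only: aggregate per-frame quality predictions into
# clip-level features and combine them linearly. All numbers are placeholders.
import statistics

def clip_level_features(frame_scores, predicted_frame_rate):
    """Summarize predicted per-frame quality into clip-level features."""
    return {
        "average": statistics.mean(frame_scores),
        "worst": min(frame_scores),
        "best": max(frame_scores),
        "std_dev": statistics.pstdev(frame_scores),
        "frame_rate": predicted_frame_rate,
    }

def predict_video_quality(features, weights, bias=0.0):
    """Combine clip-level features with (hypothetical) regression weights."""
    return bias + sum(weights[name] * value for name, value in features.items())

if __name__ == "__main__":
    frame_scores = [3.8, 3.5, 4.1, 3.9, 3.2]          # hypothetical per-frame predictions
    features = clip_level_features(frame_scores, predicted_frame_rate=29.97)
    weights = {"average": 0.7, "worst": 0.2, "best": 0.05,
               "std_dev": -0.1, "frame_rate": 0.01}   # hypothetical weights
    print(predict_video_quality(features, weights))
```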
We propose a primary traffic based multihop relaying algorithm with cooperative transmission (PTBMR-CT). It enlarges the hop transmission distances to reduce the number of cognitive relays on the route from the cognitive source (CS) to the cognitive destination (CD). In each hop, the cognitive node that is farthest from the transmitting cognitive relay is selected as the receiving relay, chosen from the cognitive nodes in a specified area that depends on whether the primary source (PS) is transmitting data to the primary destination (PD). In addition, when the PS is transmitting data to the PD, another cognitive node is selected from the cognitive nodes in a specified area and prepared to be the receiving cognitive relay for cooperative transmission. Cooperative transmission is performed if the PS is still transmitting data to the PD when the receiving cognitive relay of the next hop transmission is being searched for. Simulation results show that PTBMR-CT reduces the average number of cognitive relays compared to conventional primary traffic based farthest neighbor relaying (PTBFNR), and that PTBMR-CT outperforms conventional PTBFNR in terms of the average end-to-end reliability, the average end-to-end throughput, the average transmission power required to transmit data from the CS to the CD, and the average end-to-end transmission latency.
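A minimal sketch of the per-hop farthest-node selection described above, with a simplified node representation and an illustrative area test; the paper's actual area geometry, which depends on the primary traffic state, is not reproduced here.

```python
# Illustrative sketch: pick the farthest eligible node from the sending relay.
import math

def select_next_relay(sender, candidates, in_area):
    """Return the candidate farthest from the sending relay among nodes
    inside the specified area (the area depends on the primary traffic state)."""
    eligible = [node for node in candidates if in_area(node)]
    if not eligible:
        return None
    return max(eligible, key=lambda node: math.dist(sender, node))

if __name__ == "__main__":
    sender = (0.0, 0.0)
    candidates = [(10.0, 2.0), (25.0, -3.0), (40.0, 1.0)]
    # Hypothetical area: nodes within 30 m of the sender and ahead of it.
    in_area = lambda n: math.dist(sender, n) <= 30.0 and n[0] > sender[0]
    print(select_next_relay(sender, candidates, in_area))
```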
Osamu SUGIMOTO Sei NAITO Yoshinori HATORI
In this paper, we propose a novel method of measuring the perceived picture quality of H.264 coded video based on parametric analysis of the coded bitstream. Parametric analysis means that the proposed method utilizes only bitstream parameters to evaluate video quality and has no access to the baseband signal (pixel-level information) of the decoded video. The proposed method extracts the quantiser scale, macroblock type, and transform coefficients from each macroblock. These parameters are used to calculate spatiotemporal image features that reflect the perception of coding artifacts, which have a strong relation to the subjective quality. A computer simulation shows that the proposed method can estimate the subjective quality with a correlation coefficient of 0.923, whereas the PSNR metric, used as a benchmark, correlates with the subjective quality at a coefficient of 0.793.
This paper presents a no-reference (NR) video-quality estimation method for compressed videos that use inter-frame prediction. The proposed method does not need bitstream information; only pixel information of the decoded videos is used for video-quality estimation. An activity value, which indicates the variance of luminance values, is calculated for every fixed-size pixel block. The activity difference between an intra-coded frame and its adjacent frame is calculated and employed for video-quality estimation. In addition, a blockiness level and a blur level are estimated at every frame by analyzing pixel information only. The estimated blockiness and blur levels are also taken into account to improve the quality-estimation accuracy of the proposed method. Experimental results show that the proposed method achieves accurate video-quality estimation without the original, artifact-free video. The correlation coefficient between subjective video quality and estimated quality is 0.925. The proposed method is suitable for automatic video-quality checks when service providers cannot access the original videos.
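A minimal sketch of the block-activity computation described above, assuming 16×16 blocks and synthetic frames; it is illustrative only and not the paper's implementation.

```python
# Sketch: per-block luminance variance ("activity") and the activity
# difference between an intra-coded frame and its adjacent frame.
import numpy as np

def block_activity(frame, block=16):
    """Variance of luminance values within each block x block pixel block."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block           # crop to a multiple of the block size
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, block * block).var(axis=1)

def activity_difference(intra_frame, adjacent_frame, block=16):
    """Mean absolute difference of per-block activities between the two frames."""
    return np.mean(np.abs(block_activity(intra_frame, block)
                          - block_activity(adjacent_frame, block)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    intra = rng.integers(0, 256, (288, 352)).astype(np.float64)           # synthetic frames
    adjacent = np.clip(intra + rng.normal(0, 4, intra.shape), 0, 255)
    print(activity_difference(intra, adjacent))
```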
Our research focuses on a quality assessment model for stereoscopic images with disparate quality in the left and right images for glasses-free stereo vision. In this paper, we examine an objective assessment model for 3-D images, considering the difference in image quality between the viewpoints generated by disparity-compensated coding. The overall stereoscopic image quality can be estimated by using only the predicted values of the left and right 2-D image qualities, based on the MPEG-7 descriptor information, without using any disparity information. As a result, the stereoscopic still image quality is assessed with high prediction accuracy: correlation coefficient = 0.98 and average error = 0.17.
Toru YAMADA Yoshihiro MIYAMOTO Masahiro SERIZAWA Takao NISHITANI
This paper proposes a video-quality estimation method based on a reduced-reference model for realtime quality monitoring in video streaming services. The proposed method chooses representative luminance values for each original video frame at the server side and transmits those values along with the pixel-position information of the representative luminance values in each frame. On the basis of this information, peak signal-to-noise ratio (PSNR) values can be estimated at the client side. This enables realtime monitoring of video-quality degradation caused by transmission errors. Experimental results show that accurate PSNR estimation can be achieved with additional information at a low bit rate. For SDTV video sequences encoded at 1 to 5 Mbps, accurate PSNR estimation (correlation coefficient of 0.92 to 0.95) is achieved with a small amount of additional information (10 to 50 kbps). This enables accurate realtime quality monitoring in video streaming services without degrading the average video quality.
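The following sketch illustrates the reduced-reference idea under assumed details (random sampling of representative pixel positions; the paper's actual selection strategy may differ): the server-side function samples luminance values and positions, and the client-side function estimates PSNR from only those samples.

```python
# Illustrative sketch of reduced-reference PSNR estimation from sparse samples.
import numpy as np

def server_side_samples(original, num_samples=256, seed=0):
    """Pick pixel positions and their luminance values from the original frame."""
    rng = np.random.default_rng(seed)                 # sampling strategy is an assumption
    h, w = original.shape
    ys = rng.integers(0, h, num_samples)
    xs = rng.integers(0, w, num_samples)
    return list(zip(ys, xs)), original[ys, xs].astype(np.float64)

def client_side_psnr(decoded, positions, ref_values, peak=255.0):
    """Estimate PSNR using only the transmitted positions and luminance values."""
    ys, xs = zip(*positions)
    mse = np.mean((decoded[list(ys), list(xs)].astype(np.float64) - ref_values) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.integers(0, 256, (480, 720)).astype(np.float64)         # synthetic frame
    decoded = np.clip(original + rng.normal(0, 3, original.shape), 0, 255) # "degraded" frame
    positions, ref_values = server_side_samples(original)
    print(client_side_psnr(decoded, positions, ref_values))
```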
Dong Kwan KIM Won-Tae KIM Seung-Min PARK
In this letter, we apply dynamic software updating to long-lived applications on the DDS middleware while minimizing service interruption and satisfying Quality of Service (QoS) requirements. We dynamically updated applications running on a commercial DDS implementation to demonstrate the applicability of our approach. The results show that our update system does not impose an undue performance overhead: all patches could be injected in less than 350 ms, and the maximum CPU usage was less than 17%. In addition, the overhead on application throughput due to dynamic updates ranged from 0 to at most 8%, and the deadline QoS of the application was satisfied while updating.
Haruhiko KAIYA Atsushi OHNISHI
Defining quality requirements completely and correctly is more difficult than defining functional requirements because stakeholders do not state most quality requirements explicitly. We thus propose a method to measure a requirements specification and identify the amount of quality requirements in the specification. We also propose another method to recommend quality requirements to be defined in such a specification. We expect that stakeholders can identify missing and unnecessary quality requirements when the measured quality requirements differ from the recommended ones. We use a semi-formal language called X-JRDL to represent requirements specifications because it is suitable for analyzing quality requirements. We applied our methods to a requirements specification and found that they contribute to defining quality requirements more completely and correctly.
Reiko TAKOU Hiroyuki SEGI Tohru TAKAGI Nobumasa SEIYAMA
The frequency regions and spectral features that can be used to measure the perceived similarity and continuity of voice quality are reported here. A perceptual evaluation test was conducted to assess the naturalness of spoken sentences in which either a vowel or a long vowel of the original speaker was replaced by that of another. Correlation analysis between the evaluation score and the spectral feature distance was conducted to select the spectral features that were expected to be effective in measuring the voice quality and to identify the appropriate speech segment of another speaker. The mel-frequency cepstrum coefficient (MFCC) and the spectral center of gravity (COG) in the low-, middle-, and high-frequency regions were selected. A perceptual paired comparison test was carried out to confirm the effectiveness of the spectral features. The results showed that the MFCC was effective for spectra across a wide range of frequency regions, the COG was effective in the low- and high-frequency regions, and the effective spectral features differed among the original speakers.
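As a hedged illustration of one of the selected features, the following sketch computes the spectral center of gravity (COG) over a chosen frequency band; the band edges, windowing, and framing are assumptions for illustration, not the paper's analysis settings.

```python
# Sketch: amplitude-weighted mean frequency (COG) within a frequency band.
import numpy as np

def spectral_cog(frame, sample_rate, band=(0.0, 1000.0)):
    """Spectral center of gravity (Hz) of one speech frame within `band`."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    mask = (freqs >= band[0]) & (freqs < band[1])
    weights = spectrum[mask]
    return float(np.sum(freqs[mask] * weights) / np.sum(weights))

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 0.025, 1.0 / sr)                     # 25 ms synthetic frame
    frame = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
    print(spectral_cog(frame, sr, band=(0.0, 1000.0)))    # low-frequency-region COG
```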
Keiichi HIROSE Tadatoshi BABASAKI
Various types of energy are necessary to continuously develop an advanced and rich life as well as economic and social activity. At the same time, to protect the global environment and to prevent the depletion of natural resources, the effective and efficient use of energy is becoming important. Electric power is one of the most important forms of energy for our life and society. This paper describes topics and survey results on technical trends in electric power supply systems, which play a core role as the infrastructure supporting the emerging information-oriented society. Specifically, power supply systems that provide high power quality and reliability (PQR) are important for the steady growth of information and communication services. Direct current (DC) power, which has been used for telecommunications power systems and information and communications technologies (ICT), enables the existing utility grid and distributed energy resources to keep a balance between supply and demand in small-scale power systems or microgrids. These techniques are expected to become part of smartgrid technologies and to facilitate the installation of distributed generators in mission-critical facilities.
This work examines energy trajectory and voice quality measurements, in addition to conventional formant and duration properties, to classify tense and lax vowels in English. Tense and lax vowels are produced with differing articulatory configurations which can be identified by measuring acoustic cues such as energy peak location, energy convexity, open quotient and spectral tilt. An analysis of variance (ANOVA) is conducted, and dialect effects are observed. An overall 85.2% classification rate is obtained using the proposed features on the TIMIT database, resulting in improvement over using only conventional acoustic features. Adding the proposed features to widely used cepstral features also results in improved classification.
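The following sketch illustrates two of the energy-trajectory cues named above under assumed definitions (peak location as the normalized position of the short-time energy maximum, convexity as the quadratic coefficient of a fit to the energy contour); the paper's exact cue definitions may differ.

```python
# Sketch: short-time energy contour of a vowel, its peak location, and a
# convexity measure from a quadratic fit. Frame settings are illustrative.
import numpy as np

def frame_energies(signal, frame_len=400, hop=160):
    """Short-time log energy of the vowel segment."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([10 * np.log10(np.sum(f ** 2) + 1e-12) for f in frames])

def energy_peak_location(energies):
    """Position of the energy peak, normalized to [0, 1] over the vowel."""
    return float(np.argmax(energies)) / max(len(energies) - 1, 1)

def energy_convexity(energies):
    """Second-order coefficient of a quadratic fit to the energy contour."""
    t = np.linspace(0.0, 1.0, len(energies))
    return float(np.polyfit(t, energies, 2)[0])

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 0.2, 1.0 / sr)                              # 200 ms synthetic "vowel"
    vowel = np.sin(2 * np.pi * 150 * t) * np.hanning(len(t))     # rising-falling energy
    e = frame_energies(vowel)
    print(energy_peak_location(e), energy_convexity(e))
```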
Xiyang LI Pingzhi FAN Dianhua WU
Optical code-division multiple-access (OCDMA) is a promising technique for multimedia transmission in fiber-optic local-area networks (LANs). Variable-weight optical orthogonal codes (OOCs) can be used for OCDMA networks supporting multiple quality of services (QoS). Most constructions for optimal variable-weight OOCs have focused on the case where the number of distinct Hamming weights of all codewords is equal to two, and the codewords of weight 3 are normally included. In this letter, four explicit constructions of optimal (υ,{4,5,6},1,Q)-OOCs are presented, and more new optimal (υ,{4,5,6},1,Q)-OOCs are obtained via recursive constructions. These improve the existing results on optimal variable-weight OOCs with at least three distinct Hamming weights and minimum Hamming weight 4.
Sumiko MIYATA Katsunori YAMAOKA
We previously proposed a novel call admission control (CAC) for maximizing total user satisfaction in a heterogeneous traffic network and showed its effectiveness by using an optimal threshold derived from numerical analysis [1]. In our previous CAC, when a new broadband flow arrives and the total accommodated bandwidth is greater than or equal to the threshold, the arriving broadband flow is rejected. In actual networks, however, users may agree to wait for a certain period until a broadband flow, such as video, begins to play. In this paper, when the total accommodated bandwidth is greater than or equal to the threshold, arriving broadband flows wait instead of being rejected. As a result, we can greatly improve total user satisfaction.
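A minimal sketch of the modified admission rule described above, assuming simple bandwidth bookkeeping and a FIFO waiting queue (both illustrative assumptions rather than the paper's model).

```python
# Sketch: threshold-based CAC where broadband flows wait instead of being rejected.
from collections import deque

class ThresholdCAC:
    def __init__(self, capacity, threshold):
        self.capacity = capacity          # total link bandwidth
        self.threshold = threshold        # admission threshold for broadband flows
        self.used = 0.0
        self.waiting = deque()            # broadband flows waiting to start

    def arrive_broadband(self, flow_id, bandwidth):
        if self.used >= self.threshold or self.used + bandwidth > self.capacity:
            self.waiting.append((flow_id, bandwidth))   # wait instead of reject
            return "waiting"
        self.used += bandwidth
        return "admitted"

    def depart(self, bandwidth):
        self.used -= bandwidth
        # Admit waiting broadband flows while the threshold and capacity allow it.
        while self.waiting and self.used < self.threshold:
            fid, bw = self.waiting[0]
            if self.used + bw > self.capacity:
                break
            self.waiting.popleft()
            self.used += bw

if __name__ == "__main__":
    cac = ThresholdCAC(capacity=100.0, threshold=70.0)
    print(cac.arrive_broadband("v1", 40.0))   # admitted
    print(cac.arrive_broadband("v2", 40.0))   # waiting (threshold not yet exceeded? used=40<70, but 80>capacity? no) -> admitted
    print(cac.arrive_broadband("v3", 20.0))   # waiting (used=80 >= threshold)
    cac.depart(40.0)
    print(cac.used, list(cac.waiting))        # waiting flow admitted after departure
```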
Chien-Sheng CHEN Yi-Wen SU Wen-Hsiung LIU Ching-Lung CHI
In this paper, a novel and effective two-phase admission control (TPAC) scheme is proposed that satisfies real-time traffic requirements in QoS mobile ad hoc networks. With a limited amount of extra overhead, TPAC can avoid network congestion through a simple and precise admission control that blocks most of the overloading flow requests during the route discovery process. Compared with previous QoS routing schemes such as the QoS-aware routing protocol and the CACP protocol, system simulations show that the proposed scheme can increase the system throughput and reduce both the dropping rate and the end-to-end delay. Therefore, TPAC is an effective QoS-guarantee protocol for real-time traffic.
Hee-Suk PANG Jun-Seok LIM Oh-Jin KWON Sang Bae CHON Mingu LEE Jeong-Hun SEO
An efficient method is proposed for reconstructing speakerphone-mode cellular phone sound. The overall transfer function from digital PCM signals stored in a cellular phone to dummy head-recorded signals is modeled as a combination of a cellular phone transfer function (CPTF) and a cellular phone-to-listener transfer function (CPLTF). The CPTF represents the linear and nonlinear characteristics of a cellular phone and is modeled by the Volterra model. The CPLTF represents the effect of the path from a cellular phone to a dummy head and is measured. Listening tests show the effectiveness of the proposed method. An application scenario of the proposed method is also addressed for sound quality assessment of cellular phones in speakerphone mode.
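As a hedged illustration of the Volterra modeling mentioned above, the following sketch implements a generic discrete second-order Volterra filter with placeholder kernels; the paper's identified CPTF kernels are not reproduced here.

```python
# Sketch: second-order Volterra filter
#   y[n] = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]
import numpy as np

def volterra_2nd_order(x, h1, h2):
    """Apply linear kernel h1 and quadratic kernel h2 to input signal x."""
    m1, m2 = len(h1), h2.shape[0]
    pad = max(m1, m2) - 1
    x_pad = np.concatenate([np.zeros(pad), x])
    y = np.zeros(len(x))
    for n in range(len(x)):
        seg1 = x_pad[n + pad - np.arange(m1)]   # x[n], x[n-1], ..., x[n-m1+1]
        seg2 = x_pad[n + pad - np.arange(m2)]
        y[n] = h1 @ seg1 + seg2 @ h2 @ seg2     # linear term + quadratic term
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    h1 = np.array([0.8, 0.2, -0.05])                 # hypothetical linear kernel
    h2 = 0.01 * np.outer([1.0, 0.5], [1.0, 0.5])     # hypothetical quadratic kernel
    print(volterra_2nd_order(x, h1, h2)[:5])
```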
Yuya SAITO Jaturong SANGIAMWONG Nobuhiko MIKI Satoshi NAGATA Tetsushi ABE Yukihiko OKUMURA
In Long-Term Evolution (LTE)-Advanced, heterogeneous networks in which femtocells and picocells overlay macrocells are being extensively discussed, in addition to traditional well-planned macrocell deployment, to further improve the system throughput. In heterogeneous network deployment, cell selection as well as inter-cell interference coordination (ICIC) is very important for improving the system and cell-edge throughput. Therefore, this paper investigates three cell selection methods associated with ICIC in heterogeneous networks in the LTE-Advanced downlink: signal-to-interference plus noise power ratio (SINR)-based, reference signal received power (RSRP)-based, and reference signal received quality (RSRQ)-based cell selection. Simulations assuming a full-buffer model, with 4 picocells and 25 items of user equipment uniformly located within 1 macrocell, show that the downlink cell and cell-edge user throughput of RSRP-based cell selection are degraded by approximately 2% and 11%, respectively, compared to those of SINR-based cell selection under the condition that maximizes the cell-edge user throughput, because of the higher interference level. Furthermore, the downlink cell-edge user throughput of RSRQ-based cell selection is improved by approximately 5%, although the overall cell throughput is degraded by approximately 6%, compared to that of SINR-based cell selection under the same condition.
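A simplified sketch of the three cell-selection criteria, using textbook-style metric definitions in linear power units; the measured powers, bias handling, and RSRQ measurement-bandwidth details are simplifying assumptions rather than the paper's simulation settings.

```python
# Sketch: pick the serving cell under RSRP-, SINR-, and RSRQ-based criteria.
def select_cell(rsrp, rssi, noise, n_rb=50):
    """rsrp, rssi: dicts cell_id -> measured power (linear units)."""
    total_rx = sum(rsrp.values())
    sinr = {c: p / (total_rx - p + noise) for c, p in rsrp.items()}   # other cells as interference
    rsrq = {c: n_rb * rsrp[c] / rssi[c] for c in rsrp}                # RSRQ = N * RSRP / RSSI
    return {
        "RSRP-based": max(rsrp, key=rsrp.get),
        "SINR-based": max(sinr, key=sinr.get),
        "RSRQ-based": max(rsrq, key=rsrq.get),
    }

if __name__ == "__main__":
    # Hypothetical macro vs. pico measurements (linear scale, not from the paper).
    rsrp = {"macro": 1.0e-6, "pico": 8.0e-7}
    rssi = {"macro": 4.0e-5, "pico": 1.2e-5}    # heavily loaded macro carrier
    print(select_cell(rsrp, rssi, noise=1.0e-8))
```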
This paper proposes new scheduling algorithms for best effort (BE) traffic classification in business femtocell networks. The purpose of traffic classification is to provide differentiated services to BE users depending on their traffic classes; this concept is called Inter User Best Effort (IUBE) in the CDMA2000 1x Evolution Data Optimized (EVDO) standard. Traffic differentiation is achieved by introducing Grade of Service (GoS) as a quality of service (QoS) parameter in the scheduler's decision metric (DM). The new scheduling algorithms are QoS Round Robin (QoS-RR), QoS Proportionally Fair (QoS-PF), QoS maximum data rate control (DRC) (QoS-maxDRC), QoS average DRC (QoS-aveDRC), QoS exponent DRC (QoS-expDRC), and QoS maxDRC-PF (QoS-maxDRC-PF). Two femtocell throughput experiments are performed using real femtocell devices in order to collect real DRC values. The first experiment examines 4, 8, 12, and 16 IUBE users, while the second examines 4 IUBE + 2 Voice over IP (VoIP), 8 IUBE + 2 VoIP, 12 IUBE + 2 VoIP, and 16 IUBE + 2 VoIP users. Average sector throughput, IUBE traffic differentiation, and VoIP delay-bound error values are investigated to compare the performance of the proposed scheduling algorithms. In conclusion, the QoS-maxDRC-PF scheduler is proposed for the business femtocell environment.
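The exact decision metrics are not given in this abstract, so the following sketch only illustrates, under assumed metric forms, how a per-user GoS weight can scale a proportional-fair or max-DRC style decision metric; the paper's actual QoS-PF and QoS-maxDRC formulas may differ.

```python
# Sketch: GoS-weighted decision metrics (assumed forms) and per-slot user selection.
def qos_pf_metric(drc, avg_throughput, gos):
    """Proportional-fair metric scaled by a per-user GoS weight."""
    return gos * drc / max(avg_throughput, 1e-9)

def qos_max_drc_metric(drc, gos, **_):
    """Max-DRC-style metric scaled by a per-user GoS weight."""
    return gos * drc

def pick_user(users, metric):
    """users: dict user_id -> per-user state; metric: callable returning the DM."""
    return max(users, key=lambda u: metric(**users[u]))

if __name__ == "__main__":
    users = {                                   # hypothetical per-slot state (bits/s)
        "u1": {"drc": 1_228_800, "avg_throughput": 600_000, "gos": 1.0},
        "u2": {"drc": 921_600,  "avg_throughput": 300_000, "gos": 2.0},
    }
    print(pick_user(users, qos_pf_metric), pick_user(users, qos_max_drc_metric))
```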
In cellular networks, maximizing energy efficiency (EE) while satisfying certain quality of service (QoS) requirements is challenging. In this article, we utilize effective capacity (EC) theory as an effective means of meeting these challenges. Based on EC, and taking a realistic base station (BS) power consumption model into account, we develop a novel EE metric, effective energy efficiency (EEE), which represents the delivered service bits per unit of energy consumption at the upper layer under QoS constraints. The problem of maximizing the EEE under EC constraints is addressed, and an optimal power control scheme is proposed to solve it. The EEE-EC tradeoff is then discussed, and the effects of diverse QoS parameters on the EEE are investigated through simulations, which provides insight into QoS provisioning and helps optimize system power consumption.
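As an illustration only, and not the paper's exact formulation, one common way to write such a metric combines the effective capacity with an affine BS power model; here R(p) is the instantaneous service rate under transmit power p, T is the frame duration, θ is the QoS exponent, P_0 is the static BS power, and Δ_p is the slope of the load-dependent power.

```latex
% Assumed illustrative form, not taken verbatim from the paper.
\begin{align}
  \mathrm{EC}(\theta) &= -\frac{1}{\theta}\,
      \ln \mathbb{E}\!\left[ e^{-\theta\, T\, R(p)} \right], \\
  P_{\mathrm{total}}(p) &= P_{0} + \Delta_{p}\, p, \\
  \mathrm{EEE}(\theta, p) &= \frac{\mathrm{EC}(\theta)}{P_{\mathrm{total}}(p)},
  \qquad
  \max_{p}\; \mathrm{EEE}(\theta, p)\ \ \text{s.t.}\ \ \mathrm{EC}(\theta) \ge C_{\min}.
\end{align}
```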
Sooyong LEE Myungchul KIM Sungwon KANG Ben LEE Kyunghee LEE Soonuk SEOL
Providing seamless QoS guarantees for multimedia services is one of the most critical requirements in the mobile Internet. However, the effects of host mobility make it difficult to provide such services. Next Steps in Signaling (NSIS) was proposed by the IETF as a new signaling protocol suite, but it fails to address some mobility issues. This paper proposes a new QoS NSIS signaling layer protocol (QoS NSLP) using a cross-layer design that supports mobility. Our approach is based on the advance discovery of a crossover node (CRN) located at the crossing point between the current and a new signaling path. The CRN then proactively reserves network resources along the new path that will be used after handoff. This proactive reservation significantly reduces the session re-establishment delay and resolves the related mobility issues in NSIS. Only a few amendments to the current NSIS protocol are needed to realize our approach. The experimental results and simulation study demonstrate that our approach considerably enhances the current NSIS in terms of QoS performance factors and network resource usage.