Yuanlong CAO Ruiwen JI Lejun JI Xun SHAO Gang LEI Hao WANG
With multiple network interfaces now widely equipped in modern mobile devices, Multipath TCP (MPTCP) is increasingly becoming the preferred transport technique, since it can use multiple network interfaces simultaneously to spread data across multiple network paths for throughput improvement. However, MPTCP performance can be seriously affected by the use of a poorly performing path in multipath transmission, especially in the presence of network attacks, under which an MPTCP path can abruptly and frequently become underperforming. In this paper, we propose a multi-expert learning-based MPTCP variant, called MPTCP-meLearning, to enhance the robustness of MPTCP performance against network attacks. MPTCP-meLearning introduces a new kind of predictor that leverages a group of representative formula-based predictors to achieve better quality prediction accuracy for each of the multiple paths. MPTCP-meLearning also includes a novel mechanism to intelligently manage multiple paths in order to mitigate the out-of-order reception and receive buffer blocking problems. Experimental results demonstrate that MPTCP-meLearning achieves better transmission performance and quality of service than the baseline MPTCP scheme.
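The abstract does not give implementation details; as a rough, hypothetical illustration of the multi-expert idea, the Python sketch below combines a few simple formula-based predictors of a path's RTT and re-weights them by their recent errors. All names, predictors, and parameters here are assumptions, not the authors' code.

```python
import math

# Hypothetical multi-expert path-quality predictor (illustration only, not the authors' code).
# Each "expert" is a simple formula-based predictor of the next RTT sample of one path;
# expert weights are decreased multiplicatively according to their recent absolute errors.
class MultiExpertRttPredictor:
    def __init__(self, eta=0.5):
        self.eta = eta                                            # weight-update rate
        self.experts = {
            "last_value": lambda h: h[-1],                        # naive persistence
            "mean":       lambda h: sum(h) / len(h),              # running mean
            "ewma":       lambda h: self._ewma(h, alpha=0.125),   # TCP-style smoothed RTT
        }
        self.weights = {name: 1.0 for name in self.experts}

    @staticmethod
    def _ewma(history, alpha):
        est = history[0]
        for x in history[1:]:
            est = (1 - alpha) * est + alpha * x
        return est

    def predict(self, history):
        preds = {name: f(history) for name, f in self.experts.items()}
        total = sum(self.weights.values())
        combined = sum(self.weights[n] * p for n, p in preds.items()) / total
        return combined, preds

    def update(self, preds, actual):
        for name, p in preds.items():
            self.weights[name] *= math.exp(-self.eta * abs(p - actual) / max(actual, 1e-9))

# Usage: track one RTT series per path and prefer the path with the lowest predicted RTT.
predictor = MultiExpertRttPredictor()
rtts = [40.0, 42.0, 41.0, 80.0, 85.0]     # ms; e.g. a path degrading under attack
for i in range(2, len(rtts)):
    _, preds = predictor.predict(rtts[:i])
    predictor.update(preds, rtts[i])
print("next-RTT estimate (ms):", round(predictor.predict(rtts)[0], 1))
```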
Kang Woo CHO Byeong-Gyu JEONG Sang Uk SHIN
The continuous development of the mobile computing environment has led to the emergence of fintech, enabling convenient financial transactions in this environment. Previously proposed financial identity services mostly adopted centralized servers, which are prone to single-point-of-failure problems and performance bottlenecks. Blockchain-based self-sovereign identity (SSI), which emerged to address these problems, is a technology that solves the problems of centralization and allows decentralized identification. However, the verifiable credential (VC), the unit of SSI data transactions, guarantees an unlimited right to erasure for the sake of self-sovereignty. This does not suit the specific nature of financial transaction networks, which require restriction of the right to erasure for credit evaluation. This paper proposes a model for VC generation and revocation verification for credit scoring data. The proposed model includes a double zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) proof in the VC generation process between the holder and the issuer. In addition, cross-revocation verification takes place between the holder and the verifier. As a result, the proposed model builds a trust platform among the holder, issuer, and verifier while maintaining the decentralized SSI attributes and focusing on the VC life cycle. The model also improves the way in which credit evaluation data are processed as VCs by granting opt-in and a special right to erasure.
This paper proposes a switched pinning control method with a multi-rating mechanism for vehicle platoons. The platoons are expressed as multi-agent systems consisting of mass-damper systems in which pinning agents receive target velocities from external devices (e.g., intelligent traffic signals). We construct a model predictive control (MPC) algorithm that switches pinning agents by solving mixed-integer quadratic programming (MIQP) problems. The optimization rate is determined according to the convergence rate to the target velocities and the inter-vehicular distances. This multi-rating mechanism can reduce the computational load caused by iterative calculation. Numerical results demonstrate that our method reduces string instability by selecting the pinning agents so as to minimize the errors between the inter-vehicular distances and the target distances.
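As a minimal sketch of the pinning-selection idea only (a brute-force enumeration over a toy mass-damper platoon, not the MIQP-based MPC formulated in the paper; all dynamics and gains are assumed):

```python
import itertools
import numpy as np

# Toy illustration of switched pinning selection (not the authors' MIQP formulation):
# agents are mass-damper systems; only pinned agents receive the target velocity,
# and the pinning set is chosen by enumerating candidates over a short horizon.
N, dt, horizon = 5, 0.1, 10
m, d, k_pin = 1.0, 0.5, 2.0          # mass, damper coefficient, pinning gain (assumed values)
v_target = 20.0

def simulate(v0, pinned):
    v = v0.copy()
    cost = 0.0
    for _ in range(horizon):
        u = np.zeros(N)
        u[list(pinned)] = k_pin * (v_target - v[list(pinned)])   # pinning input
        coupling = d * (np.roll(v, 1) - v)                       # damper coupling to predecessor
        coupling[0] = 0.0                                        # leader has no predecessor
        v = v + dt / m * (u + coupling)
        cost += np.sum((v - v_target) ** 2)                      # quadratic velocity-tracking cost
    return cost

v0 = np.array([15.0, 14.0, 13.0, 12.0, 11.0])
best = min(itertools.combinations(range(N), 2), key=lambda s: simulate(v0, s))
print("best pair of pinning agents:", best)
```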
In [31], Shin et al. proposed a Leakage-Resilient and Proactive Authenticated Key Exchange (LRP-AKE) protocol for credential services that provides not only a higher level of security against leakage of stored secrets but also secrecy of the private key with respect to the server involved. In this paper, we discuss a problem in the security proof of the LRP-AKE protocol and then propose a modified LRP-AKE protocol with a simple and effective countermeasure to the problem. We also formally prove the AKE security and mutual authentication of the entire modified LRP-AKE protocol. In addition, we describe several extensions of the (modified) LRP-AKE protocol, including 1) a synchronization issue between the client's and server's stored secrets; 2) a randomized ID for the provision of the client's privacy; and 3) a solution for preventing server-compromise impersonation attacks. Finally, we evaluate the performance overhead of the LRP-AKE protocol and show its test vectors. The performance evaluation confirms that the LRP-AKE protocol has almost the same efficiency as the (plain) Diffie-Hellman protocol, which does not provide authentication at all.
Masashi MIZOGUCHI Toshimitsu USHIO
The Smith method has been used to control physical plants with dead-time components, where the plant states after the dead time has elapsed are predicted and a control input is determined based on the predicted states. We extend the method to symbolic control and design a symbolic Smith controller that deals with a nondeterministic embedded system. Because of the nondeterministic transitions, the proposed controller computes all plant states reachable after the dead time has elapsed and determines a control input that is suitable for all of them with respect to a given control specification. The essence of the Smith method is that the effects of the dead time are suppressed by prediction; however, this is not always guaranteed for nondeterministic systems because there may exist no control input that is suitable for all predicted states. Thus, in this paper, we discuss the existence of a deadlock-free symbolic Smith controller. If such a controller exists, it is guaranteed that the effects of the dead time can be suppressed and that the controller can always issue a control input for any reachable state of the plant. If it does not exist, the deviation from the control specification is proved to be essentially inevitable.
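A minimal sketch of the prediction-and-selection step described above, on an assumed toy nondeterministic plant (the transition map, specification, and dead time are illustrative only):

```python
# Minimal sketch of the prediction step in a symbolic Smith controller (assumed toy model):
# given a nondeterministic transition map and a d-step dead time, compute every state the
# plant may reach while the queued inputs are applied, then pick an input valid for all of them.
def reachable(states, queued_inputs, trans):
    for u in queued_inputs:                       # inputs already issued but not yet applied
        states = {s2 for s in states for s2 in trans[(s, u)]}
    return states

def choose_input(states, inputs, trans, safe):
    # Return an input that keeps every predicted state inside the specification `safe`,
    # or None if the controller is deadlocked for this prediction.
    for u in inputs:
        if all(set(trans[(s, u)]) <= safe for s in states):
            return u
    return None

# Toy nondeterministic plant: states 0..2, inputs 'a'/'b', dead time of 2 steps.
trans = {
    (0, 'a'): {0, 1}, (0, 'b'): {2},
    (1, 'a'): {1},    (1, 'b'): {0, 2},
    (2, 'a'): {2},    (2, 'b'): {2},
}
safe = {0, 1}
pred = reachable({0}, ['a', 'a'], trans)          # states reachable after the dead time
print("predicted states:", pred, "-> input:", choose_input(pred, ['a', 'b'], trans, safe))
```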
To cope with complicated interference scenarios in realistic acoustic environments, supervised deep neural networks (DNNs) have been investigated to estimate different user-defined targets. Such techniques can be broadly categorized into magnitude estimation and time-frequency mask estimation. Furthermore, a mask such as the Wiener gain can be estimated directly or derived from the estimated interference power spectral density (PSD) or the estimated signal-to-interference ratio (SIR). In this paper, we propose to incorporate multi-task learning into DNN-based single-channel speech enhancement by using the speech presence probability (SPP) as a secondary target to assist the target estimation in the main task. Domain-specific information is shared between the two tasks to learn a more generalizable representation. Since the performance of a multi-task network is sensitive to the weight parameters of the loss function, homoscedastic uncertainty is introduced to adaptively learn the weights, which is shown to outperform fixed weighting. Simulation results show that the proposed multi-task scheme improves overall speech enhancement performance compared to conventional single-task methods, and that joint direct mask and SPP estimation yields the best performance among all the considered techniques.
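As an illustration of the homoscedastic-uncertainty weighting mentioned above, the sketch below follows the common log-variance formulation for two regression tasks (mask and SPP estimation); the parameter names and the exact form are assumptions, not necessarily the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of homoscedastic-uncertainty loss weighting for the two tasks (mask estimation as
# the main task, SPP estimation as the auxiliary task). This follows the common log-variance
# formulation; it is not necessarily the authors' exact loss.
class UncertaintyWeightedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable log-variance per task, initialised to 0 (i.e. unit variance).
        self.log_var_mask = nn.Parameter(torch.zeros(1))
        self.log_var_spp = nn.Parameter(torch.zeros(1))
        self.mse = nn.MSELoss()

    def forward(self, mask_pred, mask_true, spp_pred, spp_true):
        loss_mask = self.mse(mask_pred, mask_true)
        loss_spp = self.mse(spp_pred, spp_true)
        # L = sum_i [ exp(-log_var_i) * L_i + log_var_i ]; exp(-log_var) acts as an adaptive
        # task weight, and the +log_var term keeps the weight from collapsing to zero.
        return (torch.exp(-self.log_var_mask) * loss_mask + self.log_var_mask
                + torch.exp(-self.log_var_spp) * loss_spp + self.log_var_spp)

# Usage: register the criterion's parameters with the optimizer so the weights are learned.
criterion = UncertaintyWeightedLoss()
mask_pred, mask_true = torch.rand(8, 257), torch.rand(8, 257)
spp_pred, spp_true = torch.rand(8, 257), torch.rand(8, 257)
print(criterion(mask_pred, mask_true, spp_pred, spp_true))
```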
Enze YANG Shuoyan LIU Yuxin LIU Kai FANG
Crowd flow prediction in high-density urban scenes is involved in a wide range of intelligent transportation and smart city applications, and it has become a significant topic in urban computing. In this letter, a CNN-based framework called Pyramidal Spatio-Temporal Network (PSTNet) for crowd flow prediction is proposed. Spatial encoding is employed for the spatial representation of external factors, a prior pyramid enhances the feature dependence across spatial scales and temporal spans, and a post pyramid is then used to fuse the heterogeneous spatio-temporal features of multiple scales. Experimental results on TaxiBJ and MobileBJ demonstrate that the proposed PSTNet outperforms state-of-the-art methods.
Shigekazu KIMURA Toshio KAWASAKI
To improve the fifth-generation mobile communication system, highly efficient power amplifiers must be designed for base stations. An outphasing amplifier is expected to be a solution for achieving high efficiency. We designed a combiner, one of the key components of the outphasing amplifier, using a serial Chireix combiner, and fabricated an amplifier with a GaN HEMT, achieving an efficiency of 70% or more up to 9 dB output back-off in the 800 MHz band. We also fabricated a 2 GHz-band outphasing amplifier with the same design. We applied digital predistortion (DPD) to control the balance of the amplifying units in this amplifier and achieved an average efficiency of 65% with a 20 MHz modulation bandwidth.
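For readers unfamiliar with outphasing, the sketch below shows only the basic LINC decomposition that a Chireix combiner recombines; it does not reproduce the serial-combiner design or the DPD balance control discussed in the paper.

```python
import numpy as np

# Basic outphasing (LINC) decomposition: an amplitude/phase-modulated signal is split into
# two constant-envelope components whose relative phase carries the amplitude information.
# This only illustrates the principle behind the Chireix-combined amplifier, not its design.
def outphase(signal, a_max):
    amp = np.abs(signal)
    phi = np.angle(signal)
    theta = np.arccos(np.clip(amp / a_max, 0.0, 1.0))   # outphasing angle
    s1 = 0.5 * a_max * np.exp(1j * (phi + theta))
    s2 = 0.5 * a_max * np.exp(1j * (phi - theta))
    return s1, s2

t = np.linspace(0, 1, 1000)
s = (0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)) * np.exp(1j * 2 * np.pi * 100 * t)
s1, s2 = outphase(s, a_max=1.0)
print("max reconstruction error:", np.max(np.abs((s1 + s2) - s)))   # s1 + s2 recovers s
```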
Contamination of water resources with pathogenic microorganisms excreted in human feces is a worldwide public health concern. Surveillance of fecal contamination is commonly performed by routine monitoring for a single type or a few types of microorganism(s). To design a feasible routine for periodic monitoring and to control risks of exposure to pathogens, reliable statistical algorithms for inferring correlations between concentrations of microorganisms in water need to be established. Moreover, because pathogens are often present in low concentrations, some contaminations are likely to fall below a detection limit. This yields a pairwise left-censored dataset and complicates the computation of correlation coefficients. Errors in correlation estimation can be reduced if undetected values are imputed better. To obtain better imputations, we utilize side information and develop a new technique, the asymmetric Tobit model, an extension of the Tobit model that allows domain knowledge to be exploited effectively when fitting the model to a censored dataset. The empirical results demonstrate that imputation with domain knowledge is effective for this task.
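The sketch below shows only the standard building block that the asymmetric Tobit model extends: a maximum-likelihood fit of a normal model to left-censored data, where undetected values contribute the CDF at the detection limit (the synthetic data and parameters are assumptions for illustration).

```python
import numpy as np
from scipy import stats, optimize

# Minimal sketch: maximum-likelihood fit of a normal model to left-censored measurements.
# Values below the detection limit contribute the CDF at the limit; detected values
# contribute the density. This is the plain Tobit building block, not the asymmetric model.
rng = np.random.default_rng(0)
true_mu, true_sigma, limit = 1.0, 2.0, 0.0
x = rng.normal(true_mu, true_sigma, size=500)
observed = np.maximum(x, limit)            # censored sample
censored = x < limit                       # indicator of "below detection limit"

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_det = stats.norm.logpdf(observed[~censored], mu, sigma).sum()
    ll_cen = stats.norm.logcdf(limit, mu, sigma) * censored.sum()
    return -(ll_det + ll_cen)

res = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")
```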
We consider an asymptotic stabilization problem for a chain of integrators by using an event-triggered controller. The delays between event-triggered executions and the corresponding controller updates are uncertain, time-varying, and not necessarily small. We show that the considered system can be asymptotically stabilized by an event-triggered gain-scaling controller. We also show that the inter-execution times are lower bounded and that their lower bounds can be manipulated by a gain-scaling factor. Some future extensions are also discussed, and an example is given for illustration.
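A toy simulation of the event-triggered idea (a double integrator with an assumed feedback gain and relative triggering threshold, not the paper's gain-scaling construction) is sketched below.

```python
import numpy as np

# Toy event-triggered control of a double integrator (assumed gains/threshold): the feedback
# input is recomputed only when the current state has drifted far enough from the state used
# at the last update, so control updates happen at events rather than every sample.
dt, T = 0.001, 10.0
K = np.array([1.0, 2.0])                 # stabilising state-feedback gain (assumed)
sigma = 0.3                              # relative triggering threshold (assumed)

x = np.array([1.0, 0.0])                 # initial state [position, velocity]
x_sampled = x.copy()
u = -K @ x_sampled
events = 0
for _ in range(int(T / dt)):
    # Trigger condition: measurement error large relative to the current state norm.
    if np.linalg.norm(x - x_sampled) > sigma * np.linalg.norm(x):
        x_sampled = x.copy()
        u = -K @ x_sampled
        events += 1
    x = x + dt * np.array([x[1], u])     # integrator-chain dynamics: x1' = x2, x2' = u
print(f"final state {np.round(x, 4)}, control updates: {events}")
```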
Yuya KAMATAKI Yusuke KAMEDA Yasuyo KITA Ichiro MATSUDA Susumu ITOH
This paper proposes a lossless coding method for HDR color images stored in a floating-point format called Radiance RGBE. In this method, the three mantissa parts and a common exponent part, each of which is represented in 8-bit depth, are encoded using a block-adaptive prediction technique with some modifications that account for the data structure.
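For reference, the sketch below shows the standard mapping from an RGBE pixel (three 8-bit mantissas plus a shared 8-bit exponent) to floating-point RGB, which clarifies the data structure that the block-adaptive prediction operates on; it is not part of the proposed coder.

```python
# Sketch of how a Radiance RGBE pixel maps to floating-point RGB (standard format rule,
# shown only to clarify the mantissa/exponent structure that the coder exploits).
def rgbe_to_float(r, g, b, e):
    if e == 0:                           # special case: all-zero pixel
        return 0.0, 0.0, 0.0
    scale = 2.0 ** (int(e) - 128 - 8)    # shared exponent; mantissas are 8-bit
    return r * scale, g * scale, b * scale

print(rgbe_to_float(128, 64, 32, 129))   # -> (1.0, 0.5, 0.25)
```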
Yasunori SUZUKI Shoichi NARAHASHI
This paper presents linearization technologies for high-efficiency power amplifiers of cellular base stations. These technologies are important for realizing highly efficient power amplifiers that reduce the power consumption of base station equipment while achieving a sufficient non-linear distortion compensation level. It is well known that it is very difficult for a power amplifier using linearization technologies to simultaneously achieve high efficiency and a sufficient non-linear distortion compensation level. This paper presents two approaches to addressing this technical issue. The first is a feed-forward power amplifier using a Doherty amplifier as the main amplifier. The second is a digital predistortion linearizer that compensates for frequency-dependent intermodulation distortion components. Experimental results validate these approaches as effective for providing power amplification for base stations.
Masayuki TEZUKA Keisuke TANAKA
A redactable signature allows anyone to remove parts of a signed message without invalidating the signature. The need to prove the validity of digital documents issued by governments is increasing. When governments disclose documents, they must remove private information concerning individuals; redactable signatures are useful in such a situation. However, in most redactable signature schemes, to remove parts of the signed message, we need a piece of information for each part we want to remove. If a signed message consists of ℓ elements, the number of elements in an original signature is at least linear in ℓ. As far as we know, in some redactable signature schemes the number of elements in an original signature is constant, regardless of the number of elements in the message to be signed. However, these constructions have the drawback of relying on the random oracle model or the generic group model. In this paper, we construct an efficient redactable signature that overcomes these drawbacks. Our redactable signature is obtained by combining the set commitment proposed in recent work by Fuchsbauer et al. (JoC 2019) with digital signatures.
Takashi YASUI Jun-ichiro SUGISAKA Koichi HIRAYAMA
In this study, we conduct guided-mode analyses of chalcogenide glass channel waveguides with an As2Se3 core and an As2S3 lower cladding to determine their single-mode conditions across the astronomical N-band (8-12 µm). The results reveal that single-mode operation over the band can be achieved by choosing a suitable core thickness.
A neural network that outputs reconstructed images based on projection data containing scattered X-rays is presented, and the proposed scheme exhibits better accuracy than conventional computed tomography (CT), in which the scatter information is removed. In medical X-ray CT, it is common practice to remove scattered X-rays using a collimator placed in front of the detector. In this study, the scattered X-rays were assumed to carry useful information, and a method was devised to exploit this information effectively using a neural network. To this end, we generated 70,000 sets of projection data by Monte Carlo simulation using, as the target object, a cube comprising 216 (6 × 6 × 6) smaller cubes with random density parameters. For each projection simulation, the densities of the smaller cubes were reset to different values, and detectors were deployed around the target object to capture the scattered X-rays from all directions. A neural network was then trained on these projection data to output the densities of the smaller cubes. We confirmed through numerical evaluations that the neural-network approach utilizing scattered X-rays reconstructed images with higher accuracy than the conventional method, in which the scattered X-rays are removed. The results of this study suggest that utilizing scattered X-ray information can help significantly reduce the patient dose during imaging.
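As a purely illustrative sketch of the learning setup described above (the layer sizes, input dimension, and optimizer are assumptions, not the authors' architecture), projection readings are regressed onto the 216 per-cube densities:

```python
import torch
import torch.nn as nn

# Illustrative regression network for the setting described above: projection data that
# include scattered X-rays captured by detectors around the object are mapped to the 216
# (6 x 6 x 6) density values. Layer sizes are assumptions, not the authors' architecture.
n_detector_readings, n_densities = 4096, 216
model = nn.Sequential(
    nn.Linear(n_detector_readings, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, n_densities),
)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on dummy data, just to show the learning target (per-cube densities).
projections = torch.rand(32, n_detector_readings)     # batch of simulated projections
densities = torch.rand(32, n_densities)               # ground-truth random density parameters
loss = criterion(model(projections), densities)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```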
Keiichiro SATO Ryoichi SHINKUMA Takehiro SATO Eiji OKI Takanori IWAI Takeo ONISHI Takahiro NOBUKIYO Dai KANETOMO Kozo SATODA
Predictive spatial-monitoring, which predicts spatial information such as road traffic, has attracted much attention in the context of smart cities. Machine learning enables predictive spatial-monitoring by using a large amount of aggregated sensor data. Since the capacity of mobile networks is strictly limited, serious transmission delays occur when communication traffic loads are heavy. If some of the data used for predictive spatial-monitoring do not arrive on time, prediction accuracy degrades because the prediction has to be made using only the received data; that is, data for prediction are ‘delay-sensitive’. A utility-based allocation technique has suggested modeling the temporal characteristics of such delay-sensitive data for prioritized transmission. However, no study has addressed a temporal model for prioritized transmission in predictive spatial-monitoring. Therefore, this paper proposes a scheme that enables the creation of such a temporal model. The scheme consists of roughly two steps: the first involves creating training data from the original time-series data and a machine learning model that can use those data, while the second involves building the temporal model through feature selection in the learning model. Feature selection makes it possible to estimate, from the machine learning model, the importance of each piece of data in terms of how much it contributes to prediction accuracy. This paper considers road-traffic prediction as a scenario and shows that the temporal models created with the proposed scheme can handle real spatial datasets. A numerical study demonstrates how our temporal model works effectively in prioritized transmission for predictive spatial-monitoring in terms of prediction accuracy.
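A minimal sketch of the feature-selection step described above, using the feature importances of a random forest over lagged observations of a synthetic time series (the model choice, lag count, and data are assumptions for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of the second step described above: estimate how much each time lag of the sensor
# data contributes to prediction accuracy, via the feature importances of a learning model.
rng = np.random.default_rng(0)
n_samples, n_lags = 2000, 12
series = np.cumsum(rng.normal(size=n_samples + n_lags))          # synthetic road-traffic series

# Training data: the previous n_lags observations predict the next value.
X = np.stack([series[i:i + n_lags] for i in range(n_samples)])
y = series[n_lags:n_lags + n_samples]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importance = model.feature_importances_                          # importance per time lag
for lag, imp in enumerate(importance[::-1], start=1):            # most recent lag first
    print(f"lag -{lag}: importance {imp:.3f}")
```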
While online communities are important platforms for various social activities, many online communities fail to survive, which motivates researchers to investigate factors affecting the growth and survival of online communities. We comprehensively examine the effects of a wide variety of social network features on the growth and survival of communities in Reddit. We show that several social network features, including clique ratio, density, clustering coefficient, reciprocity, and centralization, have significant effects on the survival of communities. In contrast, we also show that the social network features examined in this paper have only weak effects on the growth of communities. Moreover, we conducted experiments predicting the future growth and survival of online communities using social network features as well as content and activity features of the communities. The results show that prediction models using social network features in addition to content and activity features achieve an approximately 30% higher F1 measure, which evaluates prediction accuracy, than models using only content and activity features. In contrast, social network features are shown not to be effective for predicting the growth of communities.
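For concreteness, the sketch below computes a few of the social network features named above with networkx on a toy directed interaction graph; the toy edges and the restriction to four features are assumptions for illustration, not the paper's full feature set.

```python
import networkx as nx

# Toy directed interaction graph for one community (who replied to whom), used only to show
# how features such as density, reciprocity, clustering, and centralization can be computed.
G = nx.DiGraph([("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "a"), ("b", "c")])
U = G.to_undirected()

centrality = nx.degree_centrality(U)
features = {
    "density": nx.density(G),
    "reciprocity": nx.reciprocity(G),
    "clustering_coefficient": nx.average_clustering(U),
    # Freeman-style degree centralization, normalised by the star-graph maximum.
    "degree_centralization":
        sum(max(centrality.values()) - v for v in centrality.values()) / (len(U) - 2),
}
for name, value in features.items():
    print(f"{name}: {value:.3f}")
```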
Hiroaki KUDO Tetsuya MATSUMOTO Kentaro KUTSUKAKE Noritaka USAMI
In this paper, we evaluate a method for predicting regions that include dislocation clusters, which are crystallographic defects, in photoluminescence (PL) images of multicrystalline silicon wafers. We applied transfer learning of a convolutional neural network to this task. Given a sub-region of a whole PL image as input, the network outputs whether dislocation cluster regions are included in the corresponding upper-wafer image or not. The network was trained using images of lower wafers located beneath the bottoms of dislocation clusters as positive examples. We experimented with three conditions for the negative examples: images from wafers at a certain depth, randomly selected images, and both. We examined accuracy and Youden's J statistic in two cases: predicting the occurrence of dislocation clusters 10 wafers above or 20 wafers above the input wafer. The results show that the accuracy and Youden's J values are not very high, but they are higher than those of a bag-of-features (visual words) method. For our purpose of finding occurrences of dislocation clusters in upper wafers from the input wafer, we found that using randomly selected images as negative examples is appropriate for the 10-wafers-above prediction, since this condition consistently produced better results than the other negative-example conditions.
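A minimal sketch of a transfer-learning setup of the kind described above (the ResNet-18 backbone, frozen layers, and input size are assumptions, not necessarily the authors' configuration):

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of transfer learning for the task described above: a CNN pretrained on natural
# images is reused as a feature extractor, and only the final layer is retrained to decide
# whether a PL sub-region image will contain dislocation clusters in an upper wafer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)         # binary output: cluster / no cluster

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on dummy PL sub-region images (batch of 8, 3-channel, 224 x 224).
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```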
Takeharu IKEZOE Takuya KOJIMA Hideharu AMANO
Recent IoT devices require extremely low standby power consumption while still providing a certain level of performance during active periods, and Coarse-Grained Reconfigurable Arrays (CGRAs) have received attention because of their high energy efficiency. To further reduce the standby energy consumption of CGRAs, the leakage power of their configuration memory must be reduced. Although power gating is a common technique, the data lost from flip-flops and memory must be restored after wake-up. Recovering everything requires numerous state transitions and considerable overhead in both execution time and energy. To address this problem, the Non-volatile Cool Mega Array (NVCMA), a CGRA providing non-volatile flip-flops (NVFFs) based on spin-transfer-torque non-volatile memory (NVM) technology, has been developed. In general, however, non-volatile memory technologies have reliability problems: some NVFFs are stuck at 0/1 and fail to store data with a certain probability. To improve the chip yield, we propose a mapping algorithm that avoids faulty processing elements of the CGRA caused by erroneous configuration data. We also propose a method to add an error-correcting code (ECC) mechanism to the NVFFs used for the configuration and constant memory. The proposed methods were applied to NVCMA to evaluate the availability rate and the reduction of write time. By using both methods, an average availability ratio of 94.2% was achieved over the nine applications, whereas the average availability ratio was only 0.056% without them, when the probability of FF failure was 0.01. The energy for storing data becomes about 2.3 times larger because of the hardware overhead of ECC, but the proposed method saves 8.6% of the writing power on average.
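To illustrate the kind of ECC that could protect configuration bits, the sketch below implements single-error-correcting Hamming(7,4) coding; the paper's actual ECC circuit and code parameters are not specified here.

```python
# Minimal sketch of single-error-correcting Hamming(7,4) coding, shown only to illustrate the
# kind of ECC that could protect configuration bits stored in NVFFs.
def hamming74_encode(d):                # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):                # c: list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of a single flipped bit, 0 if none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1            # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                            # simulate one stuck/flipped configuration bit
print("recovered:", hamming74_decode(code), "original:", data)
```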
Yahui WANG Wenxi ZHANG Xinxin KONG Yongbiao WANG Hongxin ZHANG
Laser speech detection uses a non-contact Laser Doppler Vibrometry (LDV)-based acoustic sensor to obtain speech signals by precisely measuring voice-generated surface vibrations. Over long distances, however, the detected signal is very weak and full of speckle noise. To enhance the quality and intelligibility of the detected signal, we designed a two-sided Linear Predictive Coding (LPC)-based locator and interpolator to detect and replace speckle noise. We first studied the characteristics of speckle noise in detected signals and developed a binary-state statistical model for speckle noise generation. A two-sided LPC-based locator, composed of an inverse decorrelator, a nonlinear filter, and a threshold estimator, was then designed to locate the polluted samples. This greatly improves the detectability of speckle noise and avoids false or missed detections by improving the noise-to-signal ratio (NSR). Finally, samples from both sides of the speckle noise were used to estimate the parameters of the interpolator and to generate samples to replace the polluted ones. Real-world speckle noise removal experiments and simulation-based comparative experiments were conducted, and the results show that the proposed method is better able to locate speckle noise in laser-detected speech and highly effective at replacing it.
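A heavily simplified sketch of the two-sided LPC idea follows (forward-prediction residual for detection and two-sided replacement; the order, threshold, and interpolation rule are assumptions, not the authors' locator/interpolator design):

```python
import numpy as np

# Simplified sketch: detect outlier (speckle-corrupted) samples from a large forward LPC
# prediction residual, then replace them using samples from both sides of the gap.
def lpc_coeffs(x, order):
    # Least-squares fit of an autoregressive predictor x[n] ~ sum_k a[k] * x[n-k].
    rows = np.stack([x[i:i + order][::-1] for i in range(len(x) - order)])
    targets = x[order:]
    return np.linalg.lstsq(rows, targets, rcond=None)[0]

def detect_and_repair(x, order=8, thresh=4.0):
    a = lpc_coeffs(x, order)
    pred = np.array([x[i - order:i][::-1] @ a for i in range(order, len(x))])
    resid = x[order:] - pred
    bad = np.where(np.abs(resid) > thresh * np.std(resid))[0] + order   # suspected samples
    y = x.copy()
    for i in bad:
        left, right = y[max(i - order, 0):i], y[i + 1:i + 1 + order]
        y[i] = 0.5 * (left.mean() + right.mean())    # two-sided replacement (simplified)
    return y, bad

rng = np.random.default_rng(0)
t = np.arange(2000)
speech = np.sin(2 * np.pi * 0.01 * t) + 0.05 * rng.normal(size=t.size)
speech[500] += 5.0                                    # inject one speckle-like impulse
repaired, located = detect_and_repair(speech)
print("located corrupted samples:", located)
```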