Replication is commonly used in distributed key-value stores for high availability. Recent work shows that centralized replication provides high throughput through low-overhead write coordination and consistency-aware read forwarding. Unfortunately, these systems rely on specialized hardware, which is challenging to deploy and poses various limitations. To this end, we present Dalio, a software-based centralized replication system that supports high throughput without requiring extra hardware. Our key idea is to offload the replication function to per-shard load balancers with eBPF, an emerging kernel-native technique. By building a replication coordinator with eBPF, we avoid the burdensome overhead of the kernel networking stack. Our experimental results show that Dalio achieves up to 2.05× higher throughput than vanilla Linux and is comparable to a hardware-based solution.
Guangjin OUYANG Yong GUO Yu LU Fang HE
With the rapid development of Internet technology, the type and quantity of network traffic data have increased accordingly, and network traffic classification has become an important research task. Previous research includes methods based on traditional machine learning and on deep learning; compared with machine learning, deep learning can obtain good results by converting network traffic into two-dimensional images and applying deep learning classification models. However, all of these methods share some limitations: the trained models cannot learn sustainably, and their generalization ability is limited. To solve this problem, we propose a network traffic classification method based on incremental learning and Mixup, built on generative adversarial networks. First, the network traffic is converted into a 2D image, and the original dataset is linearly interpolated using Mixup to reduce the model's tendency to overfit and improve its generalization ability; the traffic is then classified by exploiting the strength of deep learning on images. Second, we improve the traditional incremental learning algorithm to effectively address the imbalance between old and new categories in incremental learning. The experimental results show that the model performs well in classification experiments, reaching 92.26% and 93.86% accuracy on the ISCXVPN2016 and USTC datasets, respectively, and it maintains high accuracy with limited storage space as new categories are added.
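The Mixup interpolation mentioned above is itself a standard, self-contained operation: each augmented sample is a convex combination of two training examples and their labels. A minimal NumPy sketch of the standard formulation (not the authors' exact pipeline; the toy 4×4 "traffic images" are illustrative only) is:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup: convex combination of two training examples
    and their one-hot labels, with mixing weight drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2     # interpolated image
    y = lam * y1 + (1.0 - lam) * y2     # interpolated soft label
    return x, y, lam

# Two toy 4x4 grayscale "traffic images" with one-hot class labels
a, b = np.zeros((4, 4)), np.ones((4, 4))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(a, ya, b, yb, rng=np.random.default_rng(0))
```

Because the labels are mixed along with the images, the resulting soft labels discourage the classifier from becoming overconfident, which is the regularization effect the abstract refers to.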
A backdoor attack causes a deep neural network to misrecognize data containing a specific trigger because the model has been trained on malicious data into which triggers were inserted. The network correctly recognizes data without triggers but incorrectly recognizes data with triggers. Backdoor attacks have mainly been studied in the image domain; defense research in the text domain remains insufficient. In this study, we propose a method to defend against textual backdoor samples using a detection model. The proposed method detects a textual backdoor sample by comparing the output of the target model with that of a model trained on the original training data. This method can defend against attacks without access to the entire training data. For the experimental setup, we used the TensorFlow library, with the MR and IMDB datasets as the experimental datasets. When a partial training set of 1000 samples was used to train the detection model, the proposed method classified the MR and IMDB datasets with detection rates of 79.6% and 83.2%, respectively.
Shotaro SUGITANI Ryuichi NAKAJIMA Keita YOSHIDA Jun FURUTA Kazutoshi KOBAYASHI
Integrated circuits used in automotive or aerospace applications must have high soft error tolerance. Redundant flip-flops (FFs) are effective in improving soft error tolerance. However, such countermeasures carry large performance overheads and can be excessive for terrestrial applications. This paper proposes two types of radiation-hardened FFs for terrestrial use: the Primary Latch Transmission gate FF (PLTGFF) and the Feed-Back Gate Tri-state Inverter FF (FBTIFF). By increasing the critical charge (Qcrit) at weak nodes, their soft error tolerance is improved with low performance overheads. PLTGFF has 5% area, 4% delay, and 10% power overheads, while FBTIFF has 42% area, 10% delay, and 22% power overheads. Both were fabricated in a 65 nm bulk process. In α-particle and spallation neutron irradiation tests, the soft error rates are reduced by 25% for PLTGFF and 50% for FBTIFF compared with a standard FF. In the terrestrial environment, the proposed FFs offer better trade-offs between reliability and performance than multiplexed FFs such as the dual-interlocked storage cell (DICE), which incur larger overheads than the proposed FFs.
Toru TAKAHASHI Yasunori KATO Kentaro ISODA Yusuke KITSUKAWA
In this paper, a Doppler-tolerant waveform is proposed as a transmitting signal for joint radar and communication systems. In the proposed waveform, communication signals are multiplexed in the side band of a linear frequency modulated (LFM) pulse based on the orthogonal frequency division multiplexing (OFDM) scheme. The proposed waveform therefore maintains the Doppler tolerance of the original LFM pulse in radar use. In addition, it can flexibly increase the transmission rate in communication use by assigning more communication signals to the side-band subcarriers. Numerical simulations were carried out to comprehensively examine the proposed waveform in terms of the probability of detection in radar use and the symbol error rate in communication use. In conclusion, the proposed waveform is well suited as a transmitting signal for joint radar and communication systems, especially where Doppler tolerance is needed to detect fast-moving targets.
Yasuyuki MAEKAWA Koichi HARADA Junichi ABE Fumihiro YAMASHITA
The signal levels of the Ku-band BS broadcast radio wave and the JCSAT-5A beacon radio wave were simultaneously measured at Osaka Electro-Communication University (OECU, Neyagawa, Osaka), the NTT Yokosuka R&D Center (Yokosuka, Kanagawa), and a satellite base station (Matsuyama, Ehime) from April 2022 to March 2023. The yearly cumulative distribution of rain attenuation at the Yokosuka station shows the same increasing tendency relative to the ITU-R recommendations as that at the Neyagawa station, while the tendency is not clear at the Matsuyama station. Site diversity techniques are also examined among these three stations, which are separated by relatively long distances of about 300-700 km. The site diversity effects among the three stations are almost consistent with the ITU-R recommendations between the eastern and western areas of Japan. The 99.9% annual available time (0.1% unavailable time) percentage of satellite operations is shown to be guaranteed by rain margins of 3-5 dB for the yearly rain attenuation statistics at the three stations. The monthly rain attenuation statistics, however, indicate that rain margins of 6-10 dB are required to maintain the same 99.9% available time percentage, primarily around summer. This increase in rain margins is successfully suppressed to under 3 dB using site diversity operations and is well explained by the worst-month statistics of the ITU-R recommendations.
Kohei NOZAKI Yuyuan CHANG Kazuhiko FUKAWA Daichi HIRAHARA
In a space-based automatic identification system (AIS), a satellite has a wide coverage area and can thus receive AIS signals from ships on the high seas. However, wide coverage can cause multiple AIS packets to collide with each other at the satellite receiver. Furthermore, transmitted packets are affected by channel parameters such as Doppler shift, channel impulse response, and propagation delay time, which differ markedly between packets because they depend on the distance and relative speed between each ship and the satellite. Therefore, these parameters should be estimated and used for multiuser detection to separate collided packets. Nevertheless, when the received power difference between packets, or desired-to-undesired signal power ratio (DUR), is low, the accuracy of channel parameter estimation degrades so severely that multiuser detection cannot maintain sufficient bit error rate (BER) performance. To compensate for this reduced accuracy, this paper proposes a highly accurate channel estimation method. In addition to conventional correlation-based channel estimation, the proposed method applies the quasi-Newton and least-squares methods to estimate the Doppler frequency and channel impulse response, respectively. For the propagation delay time, the conventional correlation-based estimation is repeated for improvement. Multiuser detection based on the Viterbi algorithm is then performed using the estimated channel parameters. Computer simulations were conducted under conditions of two colliding packets and Rician fading, and the results show that the proposed method significantly improves channel estimation accuracy and BER performance compared with the conventional method.
Shibo DONG Haotian LI Yifei YANG Jiatianyi YU Zhenyu LEI Shangce GAO
The multiple chaos embedded gravitational search algorithm (CGSA-M) is an optimization algorithm that utilizes chaotic maps and local search methods to find optimal solutions. Despite the enhancements CGSA-M introduces over the original GSA, it exhibits a pronounced vulnerability to local optima, impeding its capacity to converge to a globally optimal solution. To alleviate this susceptibility and achieve a more balanced integration of local and global search strategies, we introduce a novel algorithm derived from CGSA-M, denoted CGSA-H. The algorithm alters the original population structure by introducing a multi-level information exchange mechanism. This modification mitigates the algorithm's sensitivity to local optima and consequently enhances its overall stability. The effectiveness of the proposed CGSA-H algorithm is validated on the IEEE CEC2017 benchmark test set, consisting of 29 functions. The results demonstrate that CGSA-H outperforms other algorithms in its capability to search for globally optimal solutions.
Visible-infrared person re-identification (VI-ReID) aims to achieve cross-modality matching between the visible and infrared modalities, thus enabling usage in all-day monitoring scenarios. Existing VI-ReID methods have indeed achieved promising performance by considering global information for identity-related discriminative learning. However, they often overlook the importance of local information, which can contribute significantly to learning identity-specific discriminative cues. Moreover, the substantial modality gap typically poses challenges during model training. In response to these issues, we propose a VI-ReID method called partial enhancement and channel aggregation (PECA), with contributions in three aspects. First, to capture local information, we introduce the global-local similarity learning (GSL) module, which compels the encoder to focus on fine-grained details by increasing the similarity between global and local features within various feature spaces. Second, to address the modality gap, we propose an inter-modality channel aggregation learning (ICAL) approach, which progressively guides the learning of modality-invariant features; ICAL not only progressively alleviates the modality gap but also augments the training data. Additionally, we introduce a novel instance-modality contrastive loss, which facilitates the learning of modality-invariant and identity-related features at both the instance and modality levels. Extensive experiments on the SYSU-MM01 and RegDB datasets show that PECA outperforms state-of-the-art methods.
Takumi INABA Takatsugu ONO Koji INOUE Satoshi KAWAKAMI
The performance improvement offered by CMOS circuit technology is reaching its limits. Many researchers have been studying computing technologies that use emerging devices to address this critical issue. Nanophotonic technology is a promising candidate owing to its ultra-low latency, high bandwidth, and low power characteristics. Although previous research has developed hardware accelerators that exploit nanophotonic circuits for AI inference, accelerating training, which requires complex floating-point (FP) operations, has not been considered. In particular, the design balance between optical and electrical circuits has a critical impact on the latency, energy, and accuracy of the arithmetic system, and thus requires careful consideration of the optimal design. In this study, we design three types of opto-electrical floating-point multipliers (OEFMs): accuracy-oriented (Ao-OEFM), latency-oriented (Lo-OEFM), and energy-oriented (Eo-OEFM). Our evaluation confirms that Ao-OEFM has high noise resistance, while Lo-OEFM and Eo-OEFM still provide sufficient calculation accuracy. Compared with conventional electrical circuits, Lo-OEFM achieves an 87% reduction in latency, and Eo-OEFM reduces energy consumption by 42%.
This study introduces a pattern-matching method to enhance the efficiency and accuracy of physical verification of cell libraries. The pattern-matching method swiftly compares the layouts of all I/O units within a specific area, identifying significantly different I/O units. Utilizing random sampling or full permutation can improve the efficiency of I/O cell library verification. Full permutation of an 11-unit I/O library produces 39,916,800 (11!) arrangements, far exceeding the capacity of current IC layout software. However, the proposed algorithm generates the layout file within 1 second and reduces the DRC verification time, otherwise impractically long, to 63 seconds while executing 415 DRC rules. This approach effectively improves the ability to detect layer density errors in I/O libraries. Whereas conventional flows detect layer density and DRC issues only when adjacent I/O cells are actually placed, due to layout size and machine constraints, the proposed algorithm selectively generates multiple distinct combinations of I/O cells for verification, which is crucial for improving the accuracy of physical design.
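The 11! figure above is simple to verify: the number of distinct orderings of 11 units in a row is the factorial of 11.

```python
import math

# Number of distinct orderings (full permutations) of 11 I/O units
n_layouts = math.factorial(11)
print(n_layouts)  # 39916800
```

This count is why exhaustive placement-level verification is infeasible without an algorithm that prunes the combinations to check.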
Yasuyuki MAEKAWA Yoshiaki SHIBAGAKI Tomoyuki TAKAMI
The effects of site diversity techniques on Ku-band rain attenuation are investigated using two sets of simultaneous BS (Broadcasting Satellite) signal observations: one conducted among Osaka Electro-Communication University (OECU) in Neyagawa, Kyoto University in Uji, and the Shigaraki MU Observatory in Koka over the past ten years, and the other conducted among the OECU headquarters in Neyagawa and its other premises in Shijonawate and Moriguchi over the past seven years. The site diversity effects among these sites, with horizontal separations of 3-50 km, are found to be largely affected by the passage direction of rain areas, characterized by rain type such as warm, cold, and stationary fronts, typhoons, and showers. The performance of site diversity primarily depends on the effective distance between the sites projected onto the rain area motion. The unavailable time percentages are theoretically shown to be reduced to about 61-73% of the ITU-R predictions by choosing the pair of sites aligned closest to the rain area motion within distances of 3-50 km. We then propose three novel site diversity methods that choose the pair of sites based on rain type, rain front motion, or rain area motion at each rainfall event. As a result, the first method, which statistically accumulates the average passage directions of each rain type from long-term observations, is useful for practical site diversity operations, reducing unavailable time percentages to about 75-85% relative to the theoretical limit of about 61-73%. The third method, based on the rain area motion obtained directly from the three-site observations, reduces unavailable time percentages close to this theoretical limit.
Ken WATANABE Ryo OKUMURA Akihiko HIRATA Thomas KÜRNER
To shorten the distance between base stations (BSs) and user terminals, next-generation mobile communication (6G) systems plan to install large numbers of remote antenna units (RAUs) on traffic signals and street lights and to connect these RAUs to baseband units (BBUs) on buildings using terahertz (THz) band fronthaul radio links capable of data rates exceeding 100 Gbit/s. However, when THz-band fronthaul wireless links are densely deployed in urban areas, the challenge is to maintain line-of-sight (LOS) between RAUs and BBUs and to prevent interference between fronthaul wireless links. In this study, the three-dimensional (3D) radiation pattern of a 300-GHz-band high-gain antenna was measured using the near-field-to-far-field (NF-FF) conversion method, and its accuracy was compared with far-field measurement results. Moreover, an algorithm is proposed for automatically deploying a 300-GHz-band wireless fronthaul network, positioning BBUs in locations where one BBU can connect to as many RAUs as possible. Propagation simulations for fronthaul wireless links placed by the automatic deployment algorithm, using the measured 3D radiation patterns of the high-gain antennas, show no interference between the fronthaul wireless links.
Archana K. RAJAN Masaki BANDAI
Constrained Application Protocol (CoAP) is a popular UDP-based data transfer protocol designed for constrained nodes in lossy networks. A congestion control method is essential in such environments to ensure proper data transfer. CoAP offers a default congestion control technique based on binary exponential backoff (BEB), which calculates the retransmission timeout (RTO) irrespective of the round-trip time (RTT). CoAP simple congestion control/advanced (COCOA) is a standard alternative algorithm. COCOA computes the RTO using an exponentially weighted moving average (EWMA) according to the type of RTT sample: strong or weak. The constant weight values used in COCOA delay the adaptation of the RTO, and this slow convergence can cause QoS problems in many IoT applications. In this paper, we propose a new method called Flexi COCOA, which computes the RTO with flexible weights for strong RTT samples. We show that Flexi COCOA is more network-sensitive than COCOA: the variable weight yields a better RTO and utilizes resources more effectively. Flexi COCOA is implemented and validated against COCOA using the Cooja simulator in Contiki OS. We carried out extensive simulations with different topologies, packet sending rates, and packet error rates. Our results show that Flexi COCOA outperforms COCOA and can improve the QoS of IoT monitoring applications.
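For readers unfamiliar with COCOA's estimator, the strong-sample path can be sketched as follows. This is an illustrative Python sketch in the style of the CoRE CoCoA draft (RFC 6298-like smoothing plus a fixed 0.5 blending weight); the exact constants should be checked against the draft, and Flexi COCOA's adaptive weighting, which is the paper's contribution, is deliberately not reproduced here.

```python
class CocoaRto:
    """EWMA-based RTO estimation in the style of CoCoA.

    Strong RTT samples (from non-retransmitted exchanges) feed an
    RFC 6298-style smoothed estimator; the overall RTO is then blended
    with a fixed weight. Flexi COCOA makes that weight adaptive.
    """
    ALPHA, BETA, K_STRONG = 1 / 8, 1 / 4, 4

    def __init__(self, initial_rto=2.0):
        self.rto = initial_rto
        self.srtt = None      # smoothed RTT
        self.rttvar = None    # RTT variance estimate

    def on_strong_rtt(self, rtt):
        if self.srtt is None:                 # first sample initializes
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        e_strong = self.srtt + self.K_STRONG * self.rttvar
        # Fixed 0.5 blending weight -- the constant that delays adaptation
        self.rto = 0.5 * e_strong + 0.5 * self.rto
        return self.rto

est = CocoaRto(initial_rto=2.0)
print(est.on_strong_rtt(1.0))  # → 2.5
```

The fixed blending weight is precisely what slows RTO convergence after a change in network conditions, which motivates the flexible weights in Flexi COCOA.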
Bit-interleaved coded modulation with iterative decoding (BICM-ID) effectively provides high spectral efficiency and coding gain for digital coherent systems over additive white Gaussian noise (AWGN) and optical fiber channels. We previously proposed combining probabilistic amplitude shaping (PAS) with BICM-ID to further improve system performance. However, the BICM-ID performance depends on the binary labeling scheme used for the constellation points. In this study, we evaluated the effect of binary labeling schemes on the performance of the PAS with BICM-ID system. Numerical simulations showed that the PAS with BICM-ID system employing a suitable binary labeling scheme offers a significant coding gain over both the AWGN and optical fiber channels. The system is also robust against performance degradation caused by the optical Kerr effect in the optical fiber channel. We used an extrinsic information transfer (EXIT) chart to analyze the suitability of binary labeling schemes and the effect of bit interleavers. The results showed that a binary labeling scheme is suitable if the slope of the demodulator's EXIT curve is close to that of the decoder's EXIT curve. The EXIT chart analysis also showed that inserting bit interleavers mitigates performance degradation during iterative decoding. In addition, we used bitwise mutual information to evaluate the SNR penalty due to the shaping and coding gaps, as well as the coding gain offered by the iterative decoding of BICM-ID.
Dinesh DAULTANI Masayuki TANAKA Masatoshi OKUTOMI Kazuki ENDO
Image classification is a typical computer vision task widely used in practical applications. The images used to train image classification networks are often clean, i.e., free of image degradation. However, convolutional neural networks trained on clean images perform poorly on degraded or corrupted images in the real world. In this study, we effectively combine robust data augmentation (DA) with knowledge distillation to improve the classification performance on degraded images. We first categorize robust data augmentations into geometric-and-color and cut-and-delete DAs. Next, we evaluate the effective positioning of cut-and-delete DA when applying knowledge distillation. Moreover, we experimentally demonstrate that combining the RandAugment and Random Erasing approaches for geometric-and-color and cut-and-delete DA, respectively, improves the generalization of the student network during knowledge transfer for the classification of degraded images.
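The cut-and-delete family mentioned above can be illustrated with a minimal Random Erasing sketch: overwrite a random rectangle of the input with noise so the network cannot rely on any single region. This simplified NumPy version uses a fixed patch fraction; the paper's exact area and aspect-ratio ranges are not reproduced here.

```python
import numpy as np

def random_erase(img, rng, frac=0.25):
    """Cut-and-delete DA in the spirit of Random Erasing: replace a
    randomly positioned rectangle with random noise. Simplified sketch."""
    h, w = img.shape[:2]
    eh, ew = max(1, int(h * frac)), max(1, int(w * frac))
    top = rng.integers(0, h - eh + 1)    # random top-left corner
    left = rng.integers(0, w - ew + 1)
    out = img.copy()                     # leave the original untouched
    out[top:top + eh, left:left + ew] = rng.random((eh, ew))
    return out

img = np.zeros((32, 32))                 # toy grayscale input
aug = random_erase(img, np.random.default_rng(0))
```

Applied on the student's inputs during distillation, this forces the student to match the teacher from incomplete evidence, which is one plausible reading of why its positioning in the pipeline matters.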
Yuya TAKADA Rikuto MOCHIDA Miya NAKAJIMA Syun-suke KADOYA Daisuke SANO Tsuyoshi KATO
Sign constraints are a handy representation of domain-specific prior knowledge that can be incorporated into machine learning. This paper presents new stochastic dual coordinate ascent (SDCA) algorithms that find the minimizer of the empirical risk under sign constraints. Generic surrogate loss functions can be plugged into the proposed algorithms, with the strong convergence guarantee inherited from vanilla SDCA. The prediction performance is demonstrated on a classification task for microbiological water quality analysis.
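In generic form, the sign-constrained empirical risk minimization problem that such SDCA algorithms target can be written as follows (an illustrative formulation; the paper's exact objective and regularizer may differ):

```latex
\min_{w \in \mathbb{R}^{d}} \;
\frac{1}{n} \sum_{i=1}^{n} \phi_i\!\left( w^{\top} x_i \right)
+ \frac{\lambda}{2} \lVert w \rVert_2^{2}
\quad \text{subject to} \quad
w_j \ge 0 \;\; (j \in S_{+}), \qquad
w_j \le 0 \;\; (j \in S_{-}),
```

where the $\phi_i$ are generic surrogate losses and $S_{+}$, $S_{-}$ index the weights whose signs are fixed a priori by domain knowledge (e.g., features known to correlate positively or negatively with the label).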
In this study, we devise several seat selection screens for a movie theater ticket reservation system that applies nudges to achieve spatial crowd smoothing without relying on economic incentives. We design three types of nudges: (i) rendering seats in less-crowded areas more noticeable; (ii) presenting social norms; and (iii) suggesting seats in less-crowded areas to people who have selected seats in crowded areas. The results of a verification experiment show that presenting social norms (ii) is generally effective in avoiding congestion regardless of ticket sales, and that the presented social-norm text is more effective in avoiding congestion when it contains motivational sentences than when it is verbally expressed. Furthermore, the results indicate that rendering seats in less-crowded areas more conspicuous (i) and suggesting seats in less-crowded areas to those who select seats in more crowded areas (iii) may be effective in avoiding congestion, depending on ticket sales. Consequently, the feasibility of spatial crowd smoothing without economic incentives, using a nudge-based seat selection screen in a ticket reservation system, is demonstrated.
Transcatheter renal denervation (RDN) is a treatment for resistant hypertension in which the renal nerves located outside the artery are ablated using a catheter inserted inside the artery. Our previous studies simulated the temperature during RDN using constant physical properties of biological tissue to validate various catheter RDN devices. Other studies report the temperature dependence of the physical properties of biological tissues; however, none have measured the electrical properties of low-water-content tissues. Adipose tissue, a type of low-water-content tissue, is closely related to RDN, so it is important to know the temperature dependence of its electrical constants. In this study, we measured the relationship between the electrical constants of bovine adipose tissue and temperature. The obtained relationship between the relative permittivity of adipose tissue and temperature was then introduced into the temperature analysis, together with the temperature dependence of the electrical constants of high-water-content tissues and of the thermal constants of biological tissues. After 180 seconds of heating, the temperature of the model with temperature-dependent physical properties was 7.25°C lower at a given position than that of the model without them. These results indicate that the temperature dependence of physical properties should be taken into account when accurate temperature analysis is required.
In a 100 VDC/5 A resistive circuit, silver electrical contacts with an airflow ejection structure are separated at a constant speed. Break arcs are generated between the contacts and blown by the airflow across the contact gap. The airflow rate is varied by changing the shapes of the contacts, and the break arcs are observed with two high-speed cameras. The following results are shown: the arc duration is shortened by the airflow; as the airflow rate increases, the arc duration becomes shorter, and the break arcs are driven farther outward from the center axis of the contacts and extinguished at a shorter arc length.