Keyword Search Results

[Keyword] sample (109 hits)

Showing hits 1-20 of 109

  • Quantum Search-to-Decision Reduction for the LWE Problem [Open Access]

    Kyohei SUDO  Keisuke HARA  Masayuki TEZUKA  Yusuke YOSHIDA  

     
    PAPER-Cryptography and Information Security

    Publicized: 2024/08/16 | Vol: E108-A No:2 | Page(s): 104-116

    The learning with errors (LWE) problem is one of the fundamental problems in cryptography, with many applications in post-quantum cryptography. The problem has two variants: the decisional-LWE problem and the search-LWE problem. An LWE search-to-decision reduction shows that the hardness of the search-LWE problem can be reduced to the hardness of the decisional-LWE problem; the efficiency of the reduction can be regarded as the gap in difficulty between the two problems. We initiate the study of quantum search-to-decision reductions for the LWE problem and propose a sample-preserving reduction, i.e., one that preserves all parameters, including the number of instances. In particular, our quantum reduction invokes the distinguisher only twice to solve the search-LWE problem, whereas classical reductions require a polynomial number of invocations. Furthermore, we give a way to amplify the success probability of the reduction algorithm. Our amplified reduction is incomparable to the classical reduction in terms of sample complexity and query complexity. Our reduction algorithm supports a wide class of error distributions and also provides a search-to-decision reduction for the learning parity with noise problem. In the process of constructing the search-to-decision reduction, we give a quantum Goldreich-Levin theorem over ℤ_q, where q is a prime. In short, this theorem states that if a hardcore predicate a・s (mod q) can be predicted with probability noticeably greater than 1/q with respect to a uniformly random a ∈ ℤ_q^n, then it is possible to determine s ∈ ℤ_q^n.
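
    In symbols, the closing Goldreich-Levin claim reads roughly as follows. This is a paraphrase from the abstract alone; the predictor P and the advantage ε are our notation, and the paper's exact quantifiers may differ:

    ```latex
    % Quantum Goldreich-Levin theorem over Z_q (q prime), paraphrased from the
    % abstract; \mathcal{P} is a predictor and \varepsilon > 0 its advantage.
    \Pr_{a \sim \mathbb{Z}_q^n}\bigl[\mathcal{P}(a) = \langle a, s\rangle \bmod q\bigr]
      \;\ge\; \tfrac{1}{q} + \varepsilon
    \;\Longrightarrow\;
    s \in \mathbb{Z}_q^n \text{ can be recovered efficiently, given quantum access to } \mathcal{P}.
    ```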

  • Brain Tumor Classification using Under-Sampled k-Space Data: A Deep Learning Approach

    Tania SULTANA  Sho KUROSAKI  Yutaka JITSUMATSU  Shigehide KUHARA  Jun'ichi TAKEUCHI  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2023/08/15 | Vol: E106-D No:11 | Page(s): 1831-1841

    We assess how well the recently developed MRI reconstruction technique, Multi-Resolution Convolutional Neural Network (MRCNN), performs in the core medical vision task of classification. The primary goal of MRCNN is to identify the best k-space undersampling patterns to accelerate MRI. In this study, we use the Figshare brain tumor dataset for MRI classification, with 3064 T1-weighted contrast-enhanced MRI (CE-MRI) images over three categories: meningioma, glioma, and pituitary tumors. We apply MRCNN to the dataset to reconstruct high-quality images from under-sampled k-space signals. Next, we apply the pre-trained VGG16 model, a Deep Neural Network (DNN) based image classifier, to the MRCNN-restored MRIs to classify the brain tumors. Our experiments show that on the MRCNN-restored data, the proposed brain tumor classifier achieved 92.79% classification accuracy at a 10% sampling rate, slightly higher than the SRCNN, MoDL, and zero-filling methods, which achieved 91.89%, 91.89%, and 90.98%, respectively. Note that our classifier was trained on fully sampled images and their labels, so it can be regarded as a model of the usual human diagnostician; hence our results suggest that MRCNN is useful for human diagnosis. In conclusion, MRCNN significantly enhances the accuracy of brain tumor classification based on tumor location using under-sampled k-space signals.
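
    For orientation, a minimal sketch of the classification stage is shown below. MRCNN itself is not reproduced here (a zero-filled inverse FFT stands in as the reconstruction baseline), and the fine-tuning head and input size are assumptions, not the paper's exact setup:

    ```python
    # Sketch only: zero-filled reconstruction baseline + pre-trained VGG16 with
    # a new 3-class head (meningioma / glioma / pituitary), per the abstract.
    import numpy as np
    import tensorflow as tf

    def zero_filled_recon(kspace, mask):
        """Baseline reconstruction: inverse FFT of the masked k-space."""
        return np.abs(np.fft.ifft2(kspace * mask))

    # Grayscale MRI slices would be resized and stacked to 3 channels before
    # entering VGG16 (an assumption; the paper's preprocessing may differ).
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(3, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    ```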

  • Adaptive Zero-Padding with Impulsive Training Signal MMSE-SMI Adaptive Array Interference Suppression

    He HE  Shun KOJIMA  Kazuki MARUTA  Chang-Jun AHN  

     
    PAPER-Communication Theory and Signals

    Publicized: 2022/09/30 | Vol: E106-A No:4 | Page(s): 674-682

    In mobile communication systems, the channel state information (CSI) is severely affected by receiver noise. The adaptive subcarrier grouping (ASG) scheme for the sample matrix inversion (SMI) based minimum mean square error (MMSE) adaptive array has been proposed previously. Although it can reduce the additive noise effect by increasing the number of samples used to derive the array weights for co-channel interference suppression, it needs to know the signal-to-noise ratio (SNR) in advance to set the threshold for subcarrier grouping. This paper proposes adaptive zero padding (AZP) in the time domain to improve the accuracy of the SMI weight matrix. The method does not need to estimate the SNR in advance; even with a constant threshold, it can adaptively identify the positions for zero padding to eliminate noise interference in the received signal. Simulation results reveal that the proposed method achieves superior bit error rate (BER) performance under various Rician K factors.
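
    A minimal sketch of the SMI-MMSE weight computation that AZP feeds into is shown below; the AZP step itself is only summarized in a comment, and the diagonal-loading term is our addition for numerical stability, not part of the paper:

    ```python
    # Textbook SMI-based MMSE array weights: w = R^{-1} r.
    import numpy as np

    def smi_mmse_weights(X, d, loading=1e-3):
        """X: (n_antennas, n_samples) snapshots; d: (n_samples,) reference signal."""
        R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
        r = X @ d.conj() / X.shape[1]          # cross-correlation with reference
        R += loading * np.eye(R.shape[0])      # regularize the finite-sample R
        return np.linalg.solve(R, r)

    # AZP (per the abstract) would zero out noise-dominated time-domain samples
    # of the impulsive training signal before forming X, improving R's accuracy.
    ```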

  • A New Subsample Time Delay Estimation Algorithm for LFM-Based Detection

    Cui YANG  Yalu XU  Yue YU  Gengxin NING  Xiaowu ZHU  

     
    PAPER-Ultrasonics

    Publicized: 2022/09/09 | Vol: E106-A No:3 | Page(s): 575-581

    This paper investigates a Subsample Time delay Estimation (STE) algorithm based on the amplitude of the cross-correlation function to improve estimation accuracy. A rough time delay estimate is first obtained with a traditional cross-correlator, and a fine estimate is then achieved by fitting the sampled cross-correlation sequence to the amplitude of the theoretical cross-correlation function of the linear frequency modulation (LFM) signal. Simulation results show that the proposed algorithm outperforms existing methods and effectively improves time delay estimation accuracy with complexity comparable to the traditional cross-correlation method. The theoretical Cramér-Rao Bound (CRB) is derived, and simulations demonstrate that the performance of STE approaches this bound. Finally, four important parameters are examined in the simulations to explore their impact on the Mean Squared Error (MSE).
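
    The coarse-plus-fine structure can be sketched as follows; here a generic parabolic fit on the correlation amplitude stands in for the paper's fit to the theoretical LFM cross-correlation amplitude:

    ```python
    # Coarse integer-sample delay from the cross-correlation peak, then a
    # subsample refinement around it (parabolic fit used as a simple stand-in).
    import numpy as np

    def subsample_delay(x, y, fs):
        """Estimate the delay of y relative to x, in seconds."""
        a = np.abs(np.correlate(y, x, mode="full"))
        k = int(np.argmax(a))                  # coarse peak index
        y0, y1, y2 = a[k - 1], a[k], a[k + 1]  # neighbors around the peak
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # subsample offset
        return ((k - (len(x) - 1)) + frac) / fs
    ```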

  • Sample Selection Approach with Number of False Predictions for Learning with Noisy Labels

    Yuichiro NOMURA  Takio KURITA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2022/07/21 | Vol: E105-D No:10 | Page(s): 1759-1768

    In recent years, deep neural networks (DNNs) have made a significant impact on a variety of research fields and applications. One drawback of DNNs is that they require huge amounts of data for training. Since it is very expensive to have experts label the data, many non-expert data collection methods, such as web crawling, have been proposed. However, datasets created by non-experts often contain corrupted labels, and DNNs trained on such datasets are unreliable. Because DNNs have an enormous number of parameters, they tend to overfit to noisy labels, resulting in poor generalization performance. This problem is called Learning with Noisy Labels (LNL). Recent studies have shown that DNNs are robust to noisy labels in the early stage of learning, before overfitting to them, because DNNs learn simple patterns first. DNNs therefore tend to output true labels for samples with noisy labels in the early stage of learning, so the number of false predictions for samples with noisy labels is higher than for samples with clean labels. Based on these observations, we propose a new sample selection approach for LNL using the number of false predictions. Our method periodically collects records of false predictions during training and selects samples with a low number of false predictions from the recent records. It then iteratively alternates between sample selection and training a DNN model on the updated dataset. Since the model is trained with more clean samples and records more accurate false predictions for sample selection, its generalization performance gradually increases. We evaluated our method on two benchmark datasets, CIFAR-10 and CIFAR-100, with synthetically generated noisy labels, and the obtained results are better than or comparable to state-of-the-art approaches.
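
    A minimal sketch of the selection rule follows; the window length and keep ratio are illustrative placeholders, not the paper's exact settings:

    ```python
    # Track per-sample false predictions across epochs; keep samples whose
    # recent false-prediction count is low (likely clean labels, per the abstract).
    import numpy as np

    class FalsePredictionSelector:
        def __init__(self, n_samples, window=10):
            self.history = np.zeros((0, n_samples), dtype=bool)
            self.window = window

        def record_epoch(self, preds, labels):
            self.history = np.vstack([self.history, preds != labels])

        def select(self, keep_ratio=0.8):
            """Indices of samples with the fewest recent false predictions."""
            counts = self.history[-self.window:].sum(axis=0)
            k = int(len(counts) * keep_ratio)
            return np.argsort(counts)[:k]
    ```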

  • Toward Realization of Scalable Packaging and Wiring for Large-Scale Superconducting Quantum Computers [Open Access]

    Shuhei TAMATE  Yutaka TABUCHI  Yasunobu NAKAMURA  

     
    INVITED PAPER

    Publicized: 2021/12/03 | Vol: E105-C No:6 | Page(s): 290-295

    In this paper, we review the basic components of superconducting quantum computers, focusing mainly on the packaging and wiring technologies required to realize large-scale superconducting quantum computers.

  • Data-Aided SMI Algorithm Using Common Correlation Matrix for Adaptive Array Interference Suppression

    Kosuke SHIMA  Kazuki MARUTA  Chang-Jun AHN  

     
    PAPER-Digital Signal Processing

    Vol: E104-A No:2 | Page(s): 404-411

    This paper proposes a novel weight derivation method to improve adaptive array interference suppression performance, extending our previously proposed sample matrix inversion algorithm using a common correlation matrix (CCM-SMI) with a data-aided approach. In recent broadband wireless communication systems such as orthogonal frequency division multiplexing (OFDM), which uses many subcarriers, computational complexity is a serious problem when the SMI algorithm is used to suppress unknown interference. To resolve this problem, the CCM-based SMI algorithm was previously proposed; it computes the correlation matrix from the received time-domain signals before the fast Fourier transform (FFT). However, due to the limited number of pilot symbols, the estimated channel state information (CSI) is often inaccurate, which limits interference suppression performance. In this paper, we employ data-aided channel state estimation: decision results of received symbols obtained by CCM-SMI are fed back to the channel estimator, improving CSI estimation accuracy. Computer simulation results reveal that our proposal achieves better bit error rate (BER) performance despite a minimal number of pilot symbols, with only a slight additional computational cost.
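
    A rough sketch of the decision-feedback idea, for a single subcarrier and QPSK, is shown below; the names and iteration count are illustrative, and the CCM construction (time-domain correlation before the FFT) is only summarized in a comment:

    ```python
    # Decisions made with the initial estimate are fed back as extra "pilots"
    # to refine the channel estimate, as the abstract describes.
    import numpy as np

    def qpsk_decision(z):
        return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

    def data_aided_csi(rx, pilots, pilot_idx, n_iter=2):
        """rx: received frequency-domain symbols on one subcarrier over time."""
        idx, syms = list(pilot_idx), list(pilots)
        for _ in range(n_iter):
            h = np.mean([rx[i] / s for i, s in zip(idx, syms)])  # LS-style CSI
            decisions = qpsk_decision(rx / h)   # equalize, then hard decision
            idx, syms = range(len(rx)), decisions
        return h

    # In CCM-SMI, the array weights come from a correlation matrix computed on
    # time-domain samples before the FFT, which keeps the complexity low.
    ```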

  • Deep Metric Learning with Triplet-Margin-Center Loss for Sketch Face Recognition

    Yujian FENG  Fei WU  Yimu JI  Xiao-Yuan JING  Jian YU  

     
    LETTER-Pattern Recognition

    Publicized: 2020/08/18 | Vol: E103-D No:11 | Page(s): 2394-2397

    Sketch face recognition matches sketch face images to photo face images. Its main challenge is learning discriminative feature representations that ensure intra-class compactness and inter-class separability. Traditional sketch face recognition methods encourage samples with the same identity to move closer and samples with different identities to move further apart, but they do not explicitly enforce intra-class compactness. In this paper, we propose the triplet-margin-center loss to address this problem by combining the triplet loss and the center loss. The triplet-margin-center loss enlarges the distance between inter-class samples while reducing intra-class variation, thereby improving intra-class compactness. Moreover, it applies a hard triplet sample selection strategy, which effectively selects hard samples to avoid an unstable training phase and slow convergence. With our approach, photo and sketch samples of the same identity lie closer in the projected space, and those of different identities lie further apart. In extensive experiments and comparisons with state-of-the-art methods, our approach achieves marked improvements in most cases.
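
    A minimal PyTorch sketch of the combined loss follows; the margin, the center-loss weight, and the omitted hard-triplet mining step are illustrative assumptions:

    ```python
    import torch
    import torch.nn.functional as F

    def triplet_margin_center_loss(anchor, pos, neg, centers, labels,
                                   margin=0.3, center_weight=0.01):
        # Triplet term: pull same-identity pairs together, push others apart.
        triplet = F.triplet_margin_loss(anchor, pos, neg, margin=margin)
        # Center term: penalize distance to each sample's class center,
        # enforcing the intra-class compactness discussed above.
        center = ((anchor - centers[labels]) ** 2).sum(dim=1).mean()
        return triplet + center_weight * center
    ```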

  • A Novel Large-Angle ISAR Imaging Algorithm Based on Dynamic Scattering Model

    Ping LI  Feng ZHOU  Bo ZHAO  Maliang LIU  Huaxi GU  

     
    PAPER-Electromagnetic Theory

    Publicized: 2020/04/17 | Vol: E103-C No:10 | Page(s): 524-532

    This paper presents a large-angle imaging algorithm for inverse synthetic aperture radar (ISAR) based on a dynamic scattering model, allowing more information to be presented in an ISAR image than in an ordinary range-Doppler (RD) image. The proposed model describes how the scattering characteristics of an ISAR target vary with observation angle. Based on this model, feature points in each sub-image of the ISAR target are extracted and matched using the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) algorithms. From these feature points, high-precision rotation angles are obtained via joint estimation, which makes large-angle imaging possible using the back-projection algorithm. Simulation results verify the validity of the proposed method.
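
    The feature-matching step can be sketched with standard OpenCV calls; the RANSAC threshold and the homography match model are illustrative choices, not necessarily the paper's:

    ```python
    import cv2
    import numpy as np

    def match_subimages(img1, img2):
        """SIFT matches between two ISAR sub-images, filtered by RANSAC."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        # RANSAC rejects outlier matches; the surviving inliers feed the joint
        # rotation-angle estimation described in the abstract.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, inliers
    ```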

  • A New Upper Bound for Finding Defective Samples in Group Testing

    Jin-Taek SEONG  

     
    LETTER-Fundamentals of Information Systems

    Publicized: 2020/02/17 | Vol: E103-D No:5 | Page(s): 1164-1167

    The aim of this paper is to show an upper bound for finding defective samples in a group testing framework. To this end, we exploit minimization of Hamming weights from coding theory and define the probability of error for our decoding scheme. We derive a new upper bound on the probability of error and show that the upper and lower bounds coincide at an optimal density ratio of the group matrix. We conclude that as the defective rate increases, the group matrix should be made sparser to find defective samples with only a small number of tests.
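
    A toy decoder in the spirit of the abstract is sketched below: it minimizes the Hamming distance between predicted and observed test outcomes (equivalently, the Hamming weight of their difference), by brute force and for small instances only:

    ```python
    import itertools
    import numpy as np

    def decode(A, y, k):
        """A: (tests, n) binary group matrix; y: observed outcomes; k: #defectives."""
        n = A.shape[1]
        best, best_dist = None, np.inf
        for S in itertools.combinations(range(n), k):
            x = np.zeros(n, dtype=int)
            x[list(S)] = 1
            y_hat = (A @ x > 0).astype(int)    # OR-channel test outcomes
            dist = np.sum(y_hat != y)          # Hamming weight of the difference
            if dist < best_dist:
                best, best_dist = S, dist
        return best
    ```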

  • Parallel Feature Network For Saliency Detection

    Zheng FANG  Tieyong CAO  Jibin YANG  Meng SUN  

     
    LETTER-Image

    Vol: E102-A No:2 | Page(s): 480-485

    Saliency detection is widely used in many vision tasks such as image retrieval, compression, and person re-identification. Deep-learning methods have achieved strong results, but most focus on performance while ignoring model efficiency, which makes them hard to transplant into other applications. Designing an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) in a parallel manner. Parallel dilation blocks are first used to extract features from different layers of the CNN; a parallel upsampling structure is then adopted to upsample the feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of feature maps. Our final model, built on VGG-16, is much smaller and faster than existing saliency models while achieving state-of-the-art performance.
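
    One plausible form of a "parallel dilation block" is sketched below; channel counts, dilation rates, and summation fusion are illustrative guesses at the structure the abstract describes:

    ```python
    import torch
    import torch.nn as nn

    class ParallelDilationBlock(nn.Module):
        """Filter the same feature map with several dilation rates in parallel."""
        def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)

        def forward(self, x):
            # Summation fusion keeps the block light, in line with the paper's
            # emphasis on model efficiency.
            return torch.stack([b(x) for b in self.branches]).sum(dim=0)
    ```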

  • A Unified Analysis of the Signal Transfer Characteristics of a Single-Path FET-R-C Circuit [Open Access]

    Tetsuya IIZUKA  Asad A. ABIDI  

     
    INVITED PAPER

    Vol: E101-C No:7 | Page(s): 432-443

    A frequently occurring subcircuit consists of a loop of a resistor (R), a field-effect transistor (FET), and a capacitor (C). The FET acts as a switch, controlled at its gate terminal by a clock voltage. This subcircuit may act as a sample-and-hold (S/H), as a passive mixer (P-M), or as a bandpass filter or bandpass impedance. In this work, we present a useful analysis that leads to a simple signal flow graph (SFG), which captures the FET-R-C circuit's action completely across a wide range of design parameters. The SFG dissects the circuit into three filtering functions and ideal sampling. This greatly simplifies the analysis of frequency response, noise, input impedance, and conversion gain, and leads to guidelines for optimum design. This paper focuses on the analysis of a single-path FET-R-C circuit's signal transfer characteristics, including the reconstruction of the complete waveform from the discrete-time sampled voltage.

  • Exponential Neighborhood Preserving Embedding for Face Recognition

    Ruisheng RAN  Bin FANG  Xuegang WU  

     
    PAPER-Pattern Recognition

    Publicized: 2018/01/23 | Vol: E101-D No:5 | Page(s): 1410-1420

    Neighborhood preserving embedding (NPE) is a widely used manifold-based dimensionality reduction technique, but it encounters two problems: it suffers from the small-sample-size (SSS) problem, and its performance is highly sensitive to the neighborhood size k. To overcome both problems, an exponential neighborhood preserving embedding (ENPE) is proposed in this paper. The main idea of ENPE is to introduce the matrix exponential into NPE, which avoids the SSS problem and yields low sensitivity to the neighborhood size k. Experiments are conducted on the ORL, Georgia Tech, and AR face databases. The results show that ENPE outperforms other unsupervised methods, such as PCA, LPP, ELPP, and NPE, and is much less sensitive to the neighborhood parameter k than the unsupervised manifold learning methods LPP, ELPP, and NPE.
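
    The matrix-exponential trick can be sketched as follows, assuming the standard NPE scatter matrices; the variable names and the use of SciPy's generalized eigensolver are our choices:

    ```python
    import numpy as np
    from scipy.linalg import expm, eigh

    def enpe_projection(XMX, XX, n_components):
        """XMX: X (I-W)^T (I-W) X^T and XX: X X^T, as in standard NPE."""
        # expm(XX) is always nonsingular, even when XX is rank-deficient,
        # which is how ENPE sidesteps the small-sample-size problem.
        vals, vecs = eigh(expm(XMX), expm(XX))
        return vecs[:, :n_components]          # smallest eigenvalues, as in NPE
    ```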

  • Ripple-Free Dual-Rate Control with Two-Degree-of-Freedom Integrator

    Takao SATO  Akira YANOU  Shiro MASUDA  

     
    PAPER-Systems and Control

    Vol: E101-A No:2 | Page(s): 460-466

    A ripple-free dual-rate control system is designed for a single-input single-output dual-rate system in which the sampling interval of the plant output is longer than the holding interval of the control input. The dual-rate system is converted to a multi-input single-output single-rate system using the lifting technique, and a control system is designed based on an error system using the steady-state variable. Because the proposed control law makes the control input constant in the steady state, both the intersample output and the sampled output converge to the set-point without steady-state error or intersample ripples when there is neither modeling error nor disturbance. Furthermore, the proposed method uses a two-degree-of-freedom integral compensation; hence, the transient response is not deteriorated by the integral action, because the integral action is canceled when there is neither modeling error nor disturbance. In the presence of a modeling error or disturbance, the integral compensation becomes active, so the steady-state error is eliminated in both the intersample and the sampled response.

  • A Simple and Effective Generalization of Exponential Matrix Discriminant Analysis and Its Application to Face Recognition

    Ruisheng RAN  Bin FANG  Xuegang WU  Shougui ZHANG  

     
    LETTER-Pattern Recognition

    Publicized: 2017/10/18 | Vol: E101-D No:1 | Page(s): 265-268

    Exponential discriminant analysis (EDA) is an effective method that has been proposed and widely used to solve the so-called small-sample-size (SSS) problem. In this paper, a simple and effective generalization of EDA, named GEDA, is presented. GEDA uses a general exponential function whose base is larger than Euler's number. Owing to this property, the distance between samples belonging to different classes is larger than in EDA, so the discrimination property is strongly emphasized. Experimental results on the Extended Yale and CMU-PIE face databases show that GEDA achieves better recognition performance than EDA.
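
    The generalization admits a compact numerical sketch, since a matrix power with an arbitrary base a > e reduces to the ordinary matrix exponential via a^M = expm(M ln a); the function name is illustrative:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def general_matrix_exp(M, base):
        """Compute base**M for a square matrix M, using a^M = expm(ln(a) * M)."""
        return expm(np.log(base) * M)
    ```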

  • Weighted Voting of Discriminative Regions for Face Recognition

    Wenming YANG  Riqiang GAO  Qingmin LIAO  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/08/04 | Vol: E100-D No:11 | Page(s): 2734-2737

    This paper presents a strategy, Weighted Voting of Discriminative Regions (WVDR), to improve face recognition performance, especially in Small Sample Size (SSS) and occlusion situations. In WVDR, we extract discriminative regions according to facial key points and discard the remaining parts. Considering that different regions of the face contribute differently to recognition, we assign weights to the regions for weighted voting. We construct a decision dictionary from the recognition results of the selected regions in the training phase, and this dictionary is used in a self-defined loss function to obtain the weights. The final identity of a test sample is the weighted vote of the selected regions. In this paper, we combine the WVDR strategy with CRC and SRC separately, and extensive experiments show that our method outperforms the baseline and several representative algorithms.
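
    The voting step itself is simple to sketch; how the weights are obtained (the paper's decision dictionary and self-defined loss) is only summarized in a comment:

    ```python
    import numpy as np

    def weighted_vote(region_preds, region_weights, n_classes):
        """region_preds: per-region predicted identities; region_weights: floats
        learned in training (from the decision dictionary, per the abstract)."""
        scores = np.zeros(n_classes)
        for pred, w in zip(region_preds, region_weights):
            scores[pred] += w                  # each region casts a weighted vote
        return int(np.argmax(scores))
    ```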

  • Performance Analysis of the Generalized Sidelobe Canceller in Finite Sample Size and Correlative Interference Situations

    Xu WANG  Julan XIE  Zishu HE  Qi ZHANG  

     
    PAPER-Digital Signal Processing

    Vol: E100-A No:11 | Page(s): 2358-2369

    In the finite-sample-size scenario, the performance of the generalized sidelobe canceller (GSC) is still affected by the desired signal even if all signal sources are independent of each other. First, a novel expression for the weight vector of the auxiliary array is derived under finite sample size. Using this new weight vector and considering correlative interferences, a general expression for the interference cancellation ratio (CR) is developed. The impact on CR performance is then analyzed for parameters including the input signal-to-noise ratio (SNR), the auxiliary array size, the correlation coefficient between the desired signal and the interference, and the number of snapshots of the sample data. Some guidelines can thus be given for practical application. Numerical simulations demonstrate the agreement between the simulation results and the analytical results.
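
    For reference, a minimal sketch of the GSC structure under analysis is shown below, with SMI-estimated auxiliary weights; all variable names are illustrative:

    ```python
    import numpy as np

    def gsc_output(X, w_q, B):
        """X: (n_ant, n_snap) snapshots; w_q: (n_ant,) quiescent weights;
        B: (n_ant, n_aux) blocking matrix."""
        d = w_q.conj() @ X                     # main (fixed beamformer) branch
        Z = B.conj().T @ X                     # auxiliary branch (signal blocked)
        Rz = Z @ Z.conj().T / Z.shape[1]       # finite-sample covariance
        rzd = Z @ d.conj() / Z.shape[1]        # cross-correlation with main branch
        w_a = np.linalg.solve(Rz, rzd)         # SMI auxiliary weights
        return d - w_a.conj() @ Z              # interference-cancelled output
    ```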

  • Particle Filter Target Tracking Algorithm Based on Dynamic Niche Genetic Algorithm

    Weicheng XIE  Junxu WEI  Zhichao CHEN  Tianqian LI  

     
    PAPER-Vision

    Vol: E100-A No:6 | Page(s): 1325-1332

    The particle filter is an important algorithm in the field of target tracking. However, it faces the problem of sample impoverishment, which is caused by the introduction of re-sampling, and it is easily affected by illumination variation; these issues seriously degrade tracking performance. To solve this problem, we introduce a particle filter target tracking algorithm based on a dynamic niche genetic algorithm. Applying the dynamic niche genetic algorithm to re-sampling ensures particle diversity, and dynamically fusing the color and profile features of the target increases accuracy under illumination variation. Test results show that the proposed algorithm tracks the target accurately, significantly increases the effective number of particles, enhances particle diversity, and exhibits better robustness and accuracy.
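
    For context, the sketch below shows a standard systematic re-sampling step, the stage the paper replaces with its dynamic niche genetic algorithm; the GA itself is not reproduced:

    ```python
    import numpy as np

    def systematic_resample(particles, weights, rng):
        """Classic systematic re-sampling. Duplicating high-weight particles is
        what causes the sample impoverishment described in the abstract."""
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        idx = np.searchsorted(np.cumsum(weights), positions)
        return particles[idx]
    ```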

  • Sub-1-V CMOS-Based Electrophoresis Using Electroless Gold Plating for Small-Form-Factor Biomolecule Manipulation

    Yuuki YAMAJI  Kazuo NAKAZATO  Kiichi NIITSU  

     
    BRIEF PAPER

    Vol: E100-C No:6 | Page(s): 592-596

    In this paper, we present a sub-1-V CMOS-based electrophoresis method for small-form-factor biomolecule manipulation contained in a microchip; this is the first time such a device has been presented in the literature. By combining CMOS technology with electroless gold plating, the electrode pitch can be reduced and the required input voltage decreased to less than 1 V. We fabricated the CMOS electrophoresis chip in a cost-competitive 0.6 µm standard CMOS process. A sample/hold circuit in each cell generates a constant output from an analog input. After forming gold electrodes with an electroless gold plating technique, we were able to manipulate red food coloring with a 0-0.7 V input voltage range. The results show that the proposed CMOS chip is effective for electrophoresis-based manipulation.

  • Advances in Analog-to-Digital Converters over the Last Decade

    Sanroku TSUKAMOTO  

     
    INVITED PAPER

    Vol: E100-A No:2 | Page(s): 524-533

    As the scaling of CMOS technology advances, transistor characteristics are evolving in ways that favor digital circuit design. This means conventional analog design techniques are becoming harder to apply in advanced technologies because of the low power supply voltage, the narrow dynamic range of switching properties, and the low transconductance of transistors. Despite such circumstances, analog-to-digital converter (ADC) performance is still advancing, thanks to innovative new architectures. This paper reviews recent trends in ADCs, exploring their performance as well as the use of time-interleaving schemes, non-static current amplifiers, and hybrid architectures.
