Author Search Result

[Author] Shunsuke OKURA (6 hits)

  • A 10-bit 800-Column Low-Power RAM Bank Including Energy-Efficient D-Flip-Flops for a Column-Parallel ADC

    Shunsuke OKURA  Tetsuro OKURA  Bogoda A. INDIKA U.K.  Kenji TANIGUCHI  

     
    PAPER

      Vol:
    E90-A No:2
      Page(s):
    358-364

    This paper describes the design of a random access memory (RAM) bank, implemented in a 0.35-µm CMOS process, for the column-parallel analog-to-digital converters (ADCs) used in CMOS imagers. A dynamic latch is employed that draws neither input DC current nor drain current during the monitoring phase. An accuracy analysis of the analog-to-digital conversion error in the RAM bank is presented to ensure low power consumption of the counter buffer circuit. Moreover, the counter buffer combines NMOS and CMOS buffers to reduce power consumption. The total current consumption of the 10-bit, 800-column, 40 MHz RAM bank is 2.9 mA when used in an imager.
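    As a rough sense of scale only: the abstract quotes current rather than power, and the supply voltage is not stated here, so taking the 3.3 V typical of a 0.35 µm CMOS process as an assumption gives:

    ```latex
    % Back-of-the-envelope conversion; V_DD = 3.3 V is an assumption,
    % not a figure from the paper.
    P = I_{DD} \cdot V_{DD} = 2.9\,\mathrm{mA} \times 3.3\,\mathrm{V}
      \approx 9.6\,\mathrm{mW},
    \qquad
    P_{\mathrm{per\,column}} \approx \frac{9.6\,\mathrm{mW}}{800}
      \approx 12\,\mathrm{\mu W}.
    ```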

  • Experimental Study of Fault Injection Attack on Image Sensor Interface for Triggering Backdoored DNN Models Open Access

    Tatsuya OYAMA  Shunsuke OKURA  Kota YOSHIDA  Takeshi FUJINO  

     
    PAPER

      Publicized:
    2021/10/26
      Vol:
    E105-A No:3
      Page(s):
    336-343

    A backdoor attack is a type of attack that induces misclassification in a deep neural network (DNN). An adversary mixes poison data, which consist of images tampered with adversarial marks at specific locations and labeled with adversarial target classes, into the training dataset. The backdoored model classifies only images bearing the adversarial mark into the adversarial target class and classifies all other images correctly. However, the attack performance degrades sharply when the location of the adversarial mark is slightly shifted. Because an adversarial mark that induces DNN misclassification is usually applied when a picture is taken, the backdoor attack has difficulty succeeding in the physical world, where the mark's position fluctuates. This paper proposes a new approach in which the adversarial mark is applied by fault injection on the mobile industry processor interface (MIPI) between an image sensor and the image recognition processor. In our attack system, two independent attack drivers are electrically connected to the MIPI data lane. Almost all image signals are transferred from the sensor to the processor untampered, because the attack signals of the two drivers cancel each other; the adversarial mark is injected into a given location of the image signal by activating the attack signal generated by the two drivers. In an experiment, a DNN implemented on a Raspberry Pi 4 classified MNIST handwritten digits transferred from the image sensor over the MIPI. Using our attack system, the adversarial mark successfully appeared in a specific small part of the MNIST images. The success rate of the backdoor attack using this adversarial mark was 91%, much higher than the 18% achieved with conventional input-image tampering.
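    The data-poisoning step described above can be sketched in a few lines. This is a minimal illustration; the mark size, position, poison ratio, and the helper name poison_dataset are placeholders, not values or code from the paper.

    ```python
    import numpy as np

    def poison_dataset(images, labels, target_class, mark_value=1.0,
                       mark_pos=(24, 24), mark_size=2, poison_ratio=0.1,
                       seed=0):
        """Illustrative backdoor poisoning: stamp a small adversarial
        mark at a fixed location and relabel those samples to the
        target class. Mark geometry, position, and poison ratio are
        placeholder values, not the ones used in the paper."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(poison_ratio * len(images)),
                         replace=False)
        r, c = mark_pos
        images[idx, r:r + mark_size, c:c + mark_size] = mark_value
        labels[idx] = target_class
        return images, labels

    # Example with MNIST-shaped data (28x28 grayscale):
    x = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    x_poison, y_poison = poison_dataset(x, y, target_class=7)
    ```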

  • Model Reverse-Engineering Attack against Systolic-Array-Based DNN Accelerator Using Correlation Power Analysis Open Access

    Kota YOSHIDA  Mitsuru SHIOZAKI  Shunsuke OKURA  Takaya KUBOTA  Takeshi FUJINO  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    152-161

    A model extraction attack is a security issue in deep neural networks (DNNs). Information on a trained DNN model is an attractive target for an adversary not only in terms of intellectual property but also of security, so an adversary tries to reveal the sensitive information contained in the trained model of a machine-learning service. Previous studies on model extraction attacks assumed that the victim provides a machine-learning cloud service and the adversary accesses it through formal queries. However, when a DNN model is implemented on an edge device, adversaries can physically access the device and try to reveal the sensitive information contained in the implemented model. We call these physical model extraction attacks model reverse-engineering (MRE) attacks to distinguish them from attacks on cloud services. Power side-channel analyses are often used in MRE attacks to reveal the internal operation from power consumption or electromagnetic leakage. Previous studies, including ours, evaluated MRE attacks with power side-channel analyses against several types of DNN processors. In this paper, information leakage from a systolic array, which serves as the matrix multiplication unit in DNN processors, is evaluated. We utilized correlation power analysis (CPA) for the MRE attack and revealed the weight parameters of a DNN model from the systolic array. Two types of systolic array were implemented on a field-programmable gate array (FPGA) to demonstrate that CPA reveals weight parameters from both. In addition, we applied an extended approach called "chain CPA" for more robust analysis of the systolic arrays. Our experimental results indicate that an adversary can reveal trained model parameters from a DNN accelerator even if the model parameters on the off-chip bus are protected with data encryption. Countermeasures against side-channel leaks will be important when implementing a DNN accelerator on an FPGA or application-specific integrated circuit (ASIC).
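    The core of a CPA attack such as the one above is a correlation between measured traces and a leakage hypothesis. The sketch below is a generic single-weight CPA recipe under a Hamming-weight model, not the paper's chain-CPA procedure; all names and trace shapes are illustrative.

    ```python
    import numpy as np

    def hamming_weight(x):
        # Bit count of each element of an 8-bit integer array.
        return np.unpackbits(x.astype(np.uint8)[..., None], axis=-1).sum(-1)

    def cpa_recover_weight(traces, inputs):
        """Generic single-weight CPA sketch (Hamming-weight leakage
        model); not the paper's chain-CPA procedure.

        traces: (n_traces, n_samples) measured power samples
        inputs: (n_traces,) known 8-bit input activations
        Returns the best 8-bit weight guess and its peak correlation."""
        n = len(inputs)
        # Standardize traces once, per sample point.
        z_t = (traces - traces.mean(0)) / (traces.std(0) + 1e-12)
        best_w, best_r = 0, -1.0
        for w in range(256):
            # Hypothesis: leakage ~ HW of the low byte of input * weight.
            hyp = hamming_weight((inputs.astype(np.uint16) * w) & 0xFF)
            z_h = (hyp - hyp.mean()) / (hyp.std() + 1e-12)
            r = np.abs(z_h @ z_t) / n  # Pearson r at every sample point
            if r.max() > best_r:
                best_w, best_r = w, r.max()
        return best_w, best_r
    ```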

  • An Area-Efficient, Low-Power CMOS Fractional Bandgap Reference

    Indika U. K. BOGODA APPUHAMYLAGE  Shunsuke OKURA  Toru IDO  Kenji TANIGUCHI  

     
    PAPER

      Vol:
    E94-C No:6
      Page(s):
    960-967

    This paper proposes an area-efficient, low-power fractional CMOS bandgap reference (BGR) that utilizes switched-current and current-memory techniques. The proposed circuit uses only one parasitic bipolar transistor and a built-in current source to generate the reference voltage; significant area and power reductions are therefore achieved, and bipolar-transistor device mismatch is eliminated. In addition, the output reference voltage can be set to almost any value. The proposed circuit was designed and simulated in a 0.18 µm CMOS process, and simulation results are presented. With a 1.6 V supply, the reference produces an output of about 628.5 mV, and simulations show that the temperature coefficient of the output is less than 13.8 ppm/°C over the temperature range from 0 to 100°C. The average current consumption is about 8.5 µA over the same range. The core circuit, including the current source, opamp, current mirrors, and switched-capacitor filters, occupies less than 0.0064 mm² (80 µm × 80 µm).
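    For a concrete feel of what that temperature coefficient means, the quoted figures imply a total output drift over the 0 to 100°C range of roughly:

    ```latex
    % Box-method TC rearranged for total drift; uses only the figures
    % quoted in the abstract.
    \Delta V_{\mathrm{ref}} = V_{\mathrm{ref}} \cdot \mathrm{TC} \cdot \Delta T
      = 628.5\,\mathrm{mV} \times 13.8\times10^{-6}/^{\circ}\mathrm{C}
        \times 100\,^{\circ}\mathrm{C}
      \approx 0.87\,\mathrm{mV}.
    ```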

  • Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface

    Tatsuya OYAMA  Kota YOSHIDA  Shunsuke OKURA  Takeshi FUJINO  

     
    PAPER

      Publicized:
    2023/09/26
      Vol:
    E107-A No:3
      Page(s):
    344-354

    Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack on image-classification systems based on deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported and pose a threat to the traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack that generates a noise area on images by superimposing an electrical signal on the mobile industry processor interface (MIPI), and showed that it can place a single adversarial mark on the input image to trigger a backdoor attack. In this paper, we propose a misclassification attack on DNNs that uses fault injection to create AEs containing small perturbations at multiple locations on the image. The perturbation positions are calculated in advance against the target traffic-sign image that will be captured during future driving. In simulation, a perturbation covering 5.2% to 5.5% of a specific image was calculated that induces misclassification to the target label. Experiments confirmed that a traffic-sign-recognition DNN on a Raspberry Pi was successfully misclassified when the target traffic sign was captured with the perturbation injected. In addition, we created robust AEs that cause misclassification of images captured at varying positions and sizes by adding a common perturbation, and we propose a method to reduce the amount of perturbation these robust AEs require. Our results demonstrate successful misclassification of the captured image with a high attack success rate even when the position and size of the captured image change slightly.
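    A minimal sketch of region-restricted AE generation in the spirit described above: the perturbation is confined to a pre-computed mask covering roughly 5% of the pixels. FGSM stands in for the paper's actual optimization, which the abstract does not specify; every name and shape here is illustrative.

    ```python
    import numpy as np

    def masked_fgsm(image, grad, mask, eps=0.1):
        """Illustrative masked-FGSM step: confine the adversarial
        perturbation to a pre-computed mask, in the spirit of the
        region-restricted AEs described above. Not the paper's method.

        image: (H, W) or (H, W, C) float array in [0, 1]
        grad:  gradient of the target-class loss w.r.t. the image
        mask:  boolean array, True where perturbation is allowed"""
        perturbed = image + eps * np.sign(grad) * mask
        return np.clip(perturbed, 0.0, 1.0)

    # Toy usage: two 5x5 patches cover ~5% of a 32x32 image.
    img = np.random.rand(32, 32).astype(np.float32)
    g = np.random.randn(32, 32).astype(np.float32)  # placeholder gradient
    m = np.zeros((32, 32), dtype=bool)
    m[4:9, 4:9] = m[20:25, 20:25] = True
    adv = masked_fgsm(img, g, m)
    ```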

  • A Reference Voltage Buffer with Settling Boost Technique for a 12 bit 18 MHz Multibit/Stage Pipelined A/D Converter

    Shunsuke OKURA  Tetsuro OKURA  Toru IDO  Kenji TANIGUCHI  

     
    PAPER

      Vol:
    E92-A No:2
      Page(s):
    367-373

    A reference voltage buffer for a multibit/stage pipelined ADC is described in which a settling boost technique improves the settling response of the pipelined stages. A 12 bit 18 MHz pipelined ADC with the buffer was designed and simulated in a 0.35 µm CMOS process. According to the simulation results, the power consumed by the reference voltage buffer is 33% lower than that of a buffer without the settling boost technique.
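    As background on why reference settling dominates this design, a single-pole model (an illustration, not the paper's analysis) shows that half-LSB accuracy at 12 bits requires roughly nine time constants of settling:

    ```latex
    % Single-pole settling toward V_ref; the residual error must fall
    % below 1/2 LSB of an N-bit converter.
    V(t) = V_{\mathrm{ref}}\left(1 - e^{-t/\tau}\right),
    \qquad
    e^{-t_s/\tau} < 2^{-(N+1)}
    \;\Rightarrow\;
    t_s > (N+1)\ln 2 \cdot \tau \approx 9\,\tau \quad (N = 12).
    ```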
