Keyword Search Result

[Keyword] EV(2636hit)

221-240hit(2636hit)

  • Digital Watermarking Method for Printed Matters Using Deep Learning for Detecting Watermarked Areas

    Hiroyuki IMAGAWA  Motoi IWATA  Koichi KISE  

     
    PAPER

      Publicized:
    2020/10/07
      Vol:
    E104-D No:1
      Page(s):
    34-42

    There are technologies, such as QR codes, for obtaining digital information from printed matter, and digital watermarking is one of them. Compared with other techniques, digital watermarking is suitable for adding information to images without spoiling their design. For this purpose, digital watermarking methods for printed matter that use detection markers or image registration techniques to locate watermarked areas have been proposed. However, the detection markers themselves can damage the appearance, so the key advantage of digital watermarking, preserving the design, is not fully realized. Methods based on image registration, on the other hand, cannot handle unregistered images. In this paper, we propose a novel digital watermarking method that uses deep learning to detect watermarked areas instead of relying on detection markers or image registration. The proposed method introduces a deep-learning-based semantic segmentation model for detecting watermarked areas in printed matter. We prepare two datasets for training the model. The first consists of geometrically transformed non-watermarked and watermarked images; it is relatively large because the images can be generated by image processing, and it is used for pre-training. The second consists of photographs of non-watermarked and watermarked printed matter; it is relatively small because taking the photographs requires considerable effort and time, but pre-training allows the model to be trained with fewer images. This dataset is used for fine-tuning to improve robustness against print-cam attacks. In the experiments, we investigated the performance of our method by implementing it on smartphones. The results show that our method can carry 96 bits of information in watermarked printed matter.
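The pre-training data generation described above (geometric transforms applied to watermarked and non-watermarked images) can be sketched with a small random affine warp. The transform ranges, nearest-neighbour sampling, and function name below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def random_affine(img, rng):
    """Apply a small random affine warp (rotation + scale + shift)
    with nearest-neighbour sampling, as a stand-in for the geometric
    transforms used to synthesize pre-training images."""
    h, w = img.shape[:2]
    ang = rng.uniform(-0.2, 0.2)          # small rotation in radians
    s = rng.uniform(0.9, 1.1)             # mild scaling
    c, si = s * np.cos(ang), s * np.sin(ang)
    ty, tx = rng.uniform(-3, 3, size=2)   # small translation in pixels
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Inverse-map each output pixel to a source coordinate.
    src_y = c * (ys - cy) - si * (xs - cx) + cy + ty
    src_x = si * (ys - cy) + c * (xs - cx) + cx + tx
    src_y = np.clip(np.rint(src_y), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(src_x), 0, w - 1).astype(int)
    return img[src_y, src_x]

rng = np.random.default_rng(0)
img = np.arange(100).reshape(10, 10)
warped = random_affine(img, rng)
```

Because such warps are cheap to generate, a large synthetic dataset for pre-training can be produced from a handful of source images.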

  • Salient Chromagram Extraction Based on Trend Removal for Cover Song Identification

    Jin S. SEO  

     
    LETTER

      Publicized:
    2020/10/19
      Vol:
    E104-D No:1
      Page(s):
    51-54

    This paper proposes a salient chromagram, obtained by removing the local trend, to improve cover song identification accuracy. The proposed salient chromagram emphasizes the tonal content of music, which is well preserved between an original song and its cover version, while reducing the effects of timbre differences. We apply the salient chromagram to sequence-alignment-based cover song identification. Experiments on two cover song datasets confirm that the proposed salient chromagram improves identification accuracy.
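The trend-removal step can be illustrated with a minimal sketch. The moving-average trend estimator, the window length, and the half-wave rectification below are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def salient_chromagram(chroma, win=9):
    """Subtract a local moving-average trend from each chroma bin.

    chroma: (12, T) array of chroma energies over T frames.
    win: trend window length in frames (hypothetical value).
    Returns the half-wave rectified residual, which keeps salient
    tonal peaks while discarding the slowly varying component that
    timbre differences mostly affect.
    """
    kernel = np.ones(win) / win
    # Local trend per pitch class: moving average along time.
    trend = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 1, chroma)
    return np.maximum(chroma - trend, 0.0)

# Toy example: a slow rising trend in every bin plus one tonal event.
t = np.arange(100)
chroma = np.tile(0.01 * t, (12, 1))
chroma[3, 50] += 5.0
sal = salient_chromagram(chroma)
```

On this toy input the smooth trend is cancelled almost exactly, while the transient tonal event in bin 3 survives, which is the behaviour a cover-robust feature needs.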

  • Faster Rotation-Based Gauss Sieve for Solving the SVP on General Ideal Lattices Open Access

    Shintaro NARISADA  Hiroki OKADA  Kazuhide FUKUSHIMA  Shinsaku KIYOMOTO  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    79-88

    The hardness of solving the shortest vector problem (SVP) is a fundamental assumption underpinning the security of lattice-based cryptographic algorithms. In 2010, Micciancio and Voulgaris proposed the Gauss Sieve, a fast heuristic algorithm for solving the SVP. In 2011, Schneider presented the Ideal Gauss Sieve, which applies to a special class of lattices, called ideal lattices, and speeds up the Gauss Sieve by exploiting their properties. However, that algorithm is applicable only if the dimension n of the ideal lattice is a power of two or n+1 is prime. In 2014, Ishiguro et al. proposed an extension of the Ideal Gauss Sieve that is applicable only if every prime factor of n is 2 or 3. In this paper, we first generalize the dimensions to which the ideal lattice properties can be applied to those n whose prime factors are drawn from 2, p, and q for two primes p and q. To the best of our knowledge, no algorithm exploiting ideal lattice properties has been proposed so far for dimensions such as 20, 44, 80, 84, and 92. We then present an algorithm that speeds up the Gauss Sieve for these dimensions. Our experiments show that the proposed algorithm is 10 times faster than the original Gauss Sieve at solving an 80-dimensional SVP. Moreover, we propose a rotation-based Gauss Sieve that is approximately 1.5 times faster than the Ideal Gauss Sieve.
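The core trick that ideal-lattice sieves exploit can be shown with a minimal sketch for the anti-cyclic case Z[x]/(x^n + 1) with n a power of two: multiplying a lattice element by x rotates its coefficient vector while preserving the Euclidean norm, so every sampled vector yields extra lattice vectors for free. This illustrates the rotation idea only, not the paper's generalized construction for other dimensions:

```python
def rotate(v):
    """Multiply the polynomial with coefficient vector v by x in
    Z[x]/(x^n + 1): every coefficient shifts up one degree and the
    top coefficient wraps around negated. The result is another
    vector of the same ideal lattice with the same Euclidean norm.
    """
    return [-v[-1]] + v[:-1]

v = [3, -1, 4, 2]   # n = 4: the polynomial 3 - x + 4x^2 + 2x^3
w = rotate(v)       # x * v mod (x^4 + 1)
```

A sieve can therefore compare a new sample against all n rotations of each stored vector, effectively multiplying the list size without extra sampling cost.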

  • Target-Oriented Deformation of Visual-Semantic Embedding Space

    Takashi MATSUBARA  

     
    PAPER

      Publicized:
    2020/09/24
      Vol:
    E104-D No:1
      Page(s):
    24-33

    Multimodal embedding is a crucial research topic for cross-modal understanding, data mining, and translation. Many studies have attempted to extract representations from given entities and align them in a shared embedding space. However, because entities in different modalities exhibit different abstraction levels and modality-specific information, it is insufficient simply to embed related entities close to each other. In this study, we propose the Target-Oriented Deformation Network (TOD-Net), a novel module that continuously deforms the embedding space into a new space under a given condition, thereby providing conditional similarities between entities. Unlike methods based on cross-modal attention applied to words and cropped images, TOD-Net is a post-process applied to an embedding space learned by an existing embedding system, and it improves that system's retrieval performance. In particular, when combined with cutting-edge models, TOD-Net achieves state-of-the-art image-caption retrieval on the MS COCO and Flickr30k datasets. Qualitative analysis reveals that TOD-Net successfully emphasizes entity-specific concepts and retrieves diverse targets by handling higher levels of diversity than existing models.

  • Quantitative Evaluation of Software Component Behavior Discovery Approach

    Cong LIU  

     
    LETTER

      Publicized:
    2020/05/21
      Vol:
    E104-D No:1
      Page(s):
    117-120

    During the execution of a software system, its execution data can be recorded. By fully exploiting these data, software practitioners can discover behavioral models that describe the actual execution of the underlying system. However, recorded unstructured execution data can be extremely complex, e.g., spanning several days. Applying existing discovery techniques to such data yields spaghetti-like models with no clear structure and little value for comprehension. Starting from the observation that a software system is composed of a set of logical components, Liu et al. [1] proposed to decompose the software behavior discovery problem into smaller independent ones by discovering a behavioral model per component. However, the effectiveness of that approach was not fully evaluated or compared with existing approaches. In this paper, we quantitatively evaluate the quality (in terms of understandability/complexity) of the discovered component behavior models. The evaluation shows that the approach reduces the complexity of the discovered models and yields better understandability.

  • Relationship between Code Reading Speed and Programmers' Age

    Yukasa MURAKAMI  Masateru TSUNODA  Masahide NAKAMURA  

     
    LETTER

      Publicized:
    2020/09/17
      Vol:
    E104-D No:1
      Page(s):
    121-125

    As society ages, it is becoming increasingly important for the software industry to secure human resources, including senior developers. To enhance the performance of senior developers, we should clarify their strengths and weaknesses and, based on that, reconsider software engineering education and development support tools. Many cognitive abilities are affected by aging to a greater or lesser extent, and we focus on human memory as one such ability. We performed a preliminary analysis based on this assumption. In the preliminary experiment, we prepared programs that differ in how strongly human memory performance (i.e., the number of variables held in short-term memory) influences reading speed, and measured the time subjects needed to understand the programs. As a result, we observed that the code reading speed of senior subjects was slower when they read programs in which the influence of memory performance is larger.

  • A Collaborative Framework Supporting Ontology Development Based on Agile and Scrum Model

    Akkharawoot TAKHOM  Sasiporn USANAVASIN  Thepchai SUPNITHI  Prachya BOONKWAN  

     
    PAPER-Software Engineering

      Publicized:
    2020/09/04
      Vol:
    E103-D No:12
      Page(s):
    2568-2577

    An ontology describes the concepts and relations in a specific knowledge domain and is important for knowledge representation and knowledge sharing. In the past few years, several tools have been introduced for ontology modeling and editing. Designing and developing an ontology is a challenging task, and its challenges are quite similar to those of software development, as it requires many collaborative activities among stakeholders (e.g., domain experts, knowledge engineers, and application users) throughout the development cycle. Most existing tools do not provide collaborative features that let stakeholders work together effectively, and a standard process for ontology development is lacking. In this work, we therefore incorporated the ontology development process into the Scrum process, which serves as a process standard in software engineering. Based on Scrum, we can perform standard agile development of ontologies, which shortens the development cycle and responds to changes better and faster. To support this idea, we propose the Scrum Ontology Development Framework, an online collaborative framework for agile ontology design and development. Each Scrum-based ontology development step is supported by different services in our framework, aiming to promote collaborative activities among stakeholders in different roles. In addition to services such as visual ontology modeling and editing, we provide three further features: 1) diagnosis of concept/relation misunderstandings, 2) cross-domain concept detection, and 3) concept classification. All of these features allow stakeholders to share their understanding and collaboratively discuss how to improve the quality of domain ontologies through community consensus.

  • Design and Implementation of Personalized Integrated Broadcast — Broadband Service in Terrestrial Networks

    Nayeon KIM  Woongsoo NA  Byungjun BAE  

     
    LETTER-Systems and Control

      Vol:
    E103-A No:12
      Page(s):
    1621-1623

    This article proposes a dynamic linkage service, a specific service model for integrated broadcast-broadband services based on ATSC 3.0. The dynamic linkage service is useful for viewers who want to continue watching a program on a TV or personal device even after its terrestrial broadcast ends because the next regularly scheduled program begins. In addition, we verify the feasibility of the proposed extended dynamic linkage service through an emulation system developed on the basis of ATSC 3.0. Taking the personal network capabilities of the viewer environment into consideration, the service was tested with 4K/2K Ultra HD, and reception of the service completed within 4 seconds over an intranet.

  • High-Performance and Hardware-Efficient Odd-Even Based Merge Sorter

    Elsayed A. ELSAYED  Kenji KISE  

     
    PAPER-Computer System

      Publicized:
    2020/08/13
      Vol:
    E103-D No:12
      Page(s):
    2504-2517

    Data sorting is an important operation in computer science and is used extensively in applications such as databases and searching. While high-performance sorting accelerators are in demand, it is very important to pay attention to the hardware resources such accelerators require. In this paper, we propose three FPGA-based architectures that accelerate sorting using the merge sort algorithm: WMS, a Wide Merge Sorter; EHMS, an Efficient Hardware Merge Sorter; and EHMSP, an Efficient Hardware Merge Sorter Plus. We target the Virtex UltraScale FPGA device. Evaluation results show that our proposed merge sorters are both high-performance and cost-effective: while using far fewer hardware resources, they achieve higher performance than the state-of-the-art. For instance, when producing 256 sorted records per cycle, implementation results for EHMS show a significant reduction in the required numbers of flip-flops (FFs) and look-up tables (LUTs), to about 66% and 79% of the state-of-the-art merge sorter, respectively. Moreover, while requiring fewer hardware resources, EHMS achieves about 1.4x higher throughput than the state-of-the-art merge sorter. For the same number of produced records, WMS also achieves about 1.6x higher throughput than the state-of-the-art while requiring about 81% of the FFs and 76% of the LUTs needed by the state-of-the-art sorter.
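The comparison pattern that odd-even based hardware merge sorters parallelize is Batcher's odd-even merge. A recursive software sketch for equal power-of-two input lengths is shown below; it illustrates the network structure only, not the WMS/EHMS architectures themselves:

```python
def oddeven_merge(a, b):
    """Batcher's odd-even merge of two sorted lists of equal
    power-of-two length: merge the even-indexed and odd-indexed
    subsequences recursively, then fix up with one layer of
    compare-exchange operations. In hardware, every min/max pair
    is an independent comparator, so each layer can run in
    parallel in a single cycle."""
    if len(a) == 1:
        return [min(a[0], b[0]), max(a[0], b[0])]
    even = oddeven_merge(a[0::2], b[0::2])
    odd = oddeven_merge(a[1::2], b[1::2])
    out = [even[0]]
    for e, o in zip(even[1:], odd[:-1]):
        out += [min(e, o), max(e, o)]
    out.append(odd[-1])
    return out

merged = oddeven_merge([1, 5, 6, 7], [2, 3, 4, 8])
```

Because the comparator positions are fixed regardless of the data, the network maps directly onto FPGA fabric, which is what makes this family of merge sorters attractive for high-throughput designs.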

  • A Reversible Data Hiding Method in Compressible Encrypted Images

    Shoko IMAIZUMI  Yusuke IZAWA  Ryoichi HIRASAWA  Hitoshi KIYA  

     
    PAPER-Cryptography and Information Security

      Vol:
    E103-A No:12
      Page(s):
    1579-1588

    We propose a reversible data hiding (RDH) method for compressible encrypted images, called encryption-then-compression (EtC) images. The proposed method allows us not only to embed a payload in encrypted images but also to compress the encrypted images containing the payload. In addition, the proposed RDH method can be applied to both plain and encrypted images, and the payload can be extracted flexibly either in the encrypted domain or from the decrypted images. Various RDH methods have been studied for the encrypted domain, but they do not achieve two-domain data hiding, and the resulting images cannot be compressed with image coding standards such as JPEG-LS and JPEG 2000. In our experiments, the proposed method shows high performance in terms of lossless compression efficiency with JPEG-LS and JPEG 2000, data hiding capacity, and marked image quality.

  • An Optimal Power Allocation Scheme for Device-to-Device Communications in a Cellular OFDM System

    Gil-Mo KANG  Cheolsoo PARK  Oh-Soon SHIN  

     
    LETTER-Communication Theory and Signals

      Publicized:
    2020/06/02
      Vol:
    E103-A No:12
      Page(s):
    1670-1673

    We propose an optimal power allocation scheme that maximizes the transmission rate of device-to-device (D2D) communications underlaying a cellular system based on orthogonal frequency division multiplexing (OFDM). The proposed algorithm first calculates the maximum allowed transmit power of a D2D transmitter so as to restrict the interference caused to cellular links that share the same OFDM subchannels with the D2D link. Then, under this maximum transmit power constraint, a water-filling-type optimization is performed to find the optimal transmit power allocation across subchannels and within each subchannel. The performance of the proposed scheme is evaluated in terms of the average achievable rate of the D2D link.
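The water-filling step can be sketched as a bisection on the water level; the per-subchannel cap `p_max` stands in for the interference-driven power limit described above. The function name, the bisection approach, and the parameter choices are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def water_filling(gains, p_total, p_max=None):
    """Allocate p_total across subchannels to maximize
    sum(log(1 + g_i * p_i)), optionally capping each p_i at p_max.
    gains are per-subchannel gain-to-noise ratios."""
    gains = np.asarray(gains, dtype=float)

    def alloc(mu):
        # Powers for water level mu: fill each channel up to mu
        # above its inverse gain, never below zero, never above cap.
        p = np.maximum(mu - 1.0 / gains, 0.0)
        return np.minimum(p, p_max) if p_max is not None else p

    lo, hi = 0.0, p_total + 1.0 / gains.min()
    for _ in range(100):          # bisect on the water level
        mu = (lo + hi) / 2
        if alloc(mu).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return alloc(lo)

p = water_filling([1.0, 0.5, 2.0], p_total=3.0)
```

As expected of water-filling, the strongest subchannel receives the most power and the weakest the least, while the total stays within the power budget.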

  • Heterogeneous-Graph-Based Video Search Reranking Using Topic Relevance

    Soh YOSHIDA  Mitsuji MUNEYASU  Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER-Vision

      Vol:
    E103-A No:12
      Page(s):
    1529-1540

    In this paper, we address the problem of analyzing the topics included in a social video group to improve video retrieval performance. Unlike previous methods that focused on individual visual aspects of videos, the proposed method aims to leverage the “mutual reinforcement” of heterogeneous modalities, such as the tags and users associated with videos on the Internet. To represent the multiple types of relationships between heterogeneous modalities, the proposed method constructs three subgraphs: a user-tag graph, a video-video graph, and a video-tag graph. We combine the three graphs to obtain a heterogeneous graph. Extracting latent features, i.e., topics, then becomes feasible by applying graph-based soft clustering to the heterogeneous graph. By estimating each video's membership in each cluster, the proposed method defines a new video similarity measure. Since the understanding of video content is enhanced by exploiting latent features obtained from different types of data that complement each other, the proposed method improves visual reranking performance. Experiments on a video dataset consisting of YouTube-8M videos show the effectiveness of the proposed method, which achieves a 24.3% improvement in mean normalized discounted cumulative gain on a search ranking task compared with the baseline method.
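The combination of the three subgraphs can be pictured as a single block adjacency matrix over users, videos, and tags; a minimal sketch follows. The node ordering and the absence of within-user and within-tag edges are assumptions made for illustration, not details taken from the paper:

```python
import numpy as np

def heterogeneous_adjacency(A_ut, A_vv, A_vt):
    """Stack the user-tag (U x T), video-video (V x V) and
    video-tag (V x T) subgraphs into one symmetric adjacency
    matrix over nodes ordered [users | videos | tags]."""
    U, T = A_ut.shape
    V = A_vv.shape[0]
    Z_uu = np.zeros((U, U))
    Z_uv = np.zeros((U, V))
    Z_tt = np.zeros((T, T))
    return np.block([
        [Z_uu,    Z_uv,   A_ut],
        [Z_uv.T,  A_vv,   A_vt],
        [A_ut.T,  A_vt.T, Z_tt],
    ])

# Toy graph: 2 users, 3 videos, 4 tags.
A = heterogeneous_adjacency(np.ones((2, 4)), np.eye(3), np.ones((3, 4)))
```

Soft clustering of this combined matrix is what lets tag and user relationships reinforce the purely visual video-video similarities.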

  • Inpainting via Sparse Representation Based on a Phaseless Quality Metric

    Takahiro OGAWA  Keisuke MAEDA  Miki HASEYAMA  

     
    PAPER-Image

      Vol:
    E103-A No:12
      Page(s):
    1541-1551

    An inpainting method via sparse representation based on a new phaseless quality metric is presented in this paper. Since the power spectra, i.e., phaseless features, of local regions within images represent their texture characteristics more successfully than pixel values do, a new quality metric based on these phaseless features is derived for image representation. Specifically, the proposed method enables sparse representation of target signals, i.e., target patches including missing intensities, by monitoring the errors converged by phase retrieval as the novel phaseless quality metric. This is the main contribution of our study. In this approach, the phase retrieval algorithm plays two important roles: (1) deriving the new quality metric, which can be computed even for images with missing intensities, and (2) converting phaseless features, i.e., power spectra, into pixel values, i.e., intensities. This novel approach therefore solves the existing problem that better features and better quality metrics could not be used for inpainting. Experimental results show that the proposed method, using sparse representation based on the new phaseless quality metric, outperforms previously reported methods that directly use pixel values for inpainting.

  • A Novel Quantitative Evaluation Index of Contrast Improvement for Dichromats

    Xi CHENG  Go TANAKA  

     
    LETTER-Image

      Vol:
    E103-A No:12
      Page(s):
    1618-1620

    In this letter, a quantitative index for evaluating the contrast improvement of color images for dichromats is proposed. The index is constructed by adding two parameters to an existing index so that the evaluation results are consistent with human evaluations. The effectiveness and validity of the proposed index are verified by experiments.

  • Novel Multi-Objective Design Approach for Cantilever of Relay Contact Using Preference Set-Based Design Method

    Yoshiki KAYANO  Kazuaki MIYANAGA  Hiroshi INOUE  

     
    BRIEF PAPER

      Publicized:
    2020/07/03
      Vol:
    E103-C No:12
      Page(s):
    713-717

    In the design of electrical contacts, it is necessary to find a solution that simultaneously satisfies multiple objectives (electrical, mechanical, and thermal performance), including conflicting requirements. Preference Set-Based Design (PSD) has been proposed as a practical procedure for fuzzy-set-based design. This brief paper proposes a concurrent design method that applies PSD to electrical contacts, specifically the design of the cantilever shape in relay contacts. To reduce the calculation (and/or experimental) cost, this paper also newly applies Design of Experiments (DoE) to the meta-modeling used in PSD; the number of calculations required for meta-modeling can thereby be reduced to 1/729. The design parameters (width and length) of a cantilever driving an electrical contact that satisfy the required performance (target deflection) are successfully obtained as ranges by PSD. The validity of the obtained design parameters is demonstrated by numerical modeling.

  • A Multiobjective Optimization Dispatch Method of Wind-Thermal Power System

    Xiaoxuan GUO  Renxi GONG  Haibo BAO  Zhenkun LU  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2020/09/18
      Vol:
    E103-D No:12
      Page(s):
    2549-2558

    It is well known that large-scale integration of wind power into the power system affects the economic and environmental objectives of generation scheduling, and the intermittency and randomness of wind power also pose new challenges to traditional deterministic generation scheduling. To deal with these problems, a multiobjective optimization dispatch method for wind-thermal power systems is proposed. The method proceeds as follows. First, a multiobjective interval generation scheduling model of the wind-thermal power system is established by describing the wind speed at the wind farm as an interval variable, with minimization of the fuel cost and the pollutant gas emission cost of the thermal units as the objective functions. Then, the optimistic and pessimistic Pareto frontiers of the multiobjective interval generation schedule are obtained by solving the model with an improved normal boundary intersection (NBI) method combined with a bilevel optimization method. Finally, the optimistic and pessimistic compromise solutions are determined by a distance evaluation method. Calculation results for a 16-unit, 174-bus system show that the proposed method yields uniform optimistic and pessimistic Pareto frontiers and quantifies the impact of wind speed interval uncertainty on the economic and environmental indicators. In addition, it is verified that the Pareto frontier of the actual scenario lies between the optimistic and pessimistic frontiers, and the influence of different wind power penetration levels on the optimistic and pessimistic Pareto frontiers is analyzed.

  • Analysis of Decoding Error Probability of Spatially “Mt. Fuji” Coupled LDPC Codes in Waterfall Region of the BEC

    Yuta NAKAHARA  Toshiyasu MATSUSHIMA  

     
    PAPER-Coding Theory

      Vol:
    E103-A No:12
      Page(s):
    1337-1346

    The spatially “Mt. Fuji” coupled (SFC) low-density parity-check (LDPC) ensemble is a modified version of the spatially coupled (SC) LDPC ensemble. Its decoding error probability in the waterfall region had previously been studied only experimentally. In this paper, we analyze it theoretically over the binary erasure channel by modifying the expected graph evolution (EGE) and covariance evolution (CE) that have been used to analyze the original SC-LDPC ensemble. In particular, we derive the initial condition modified for the SFC-LDPC ensemble. Unlike the SC-LDPC ensemble, the SFC-LDPC ensemble then exhibits a local minimum in the solution of the EGE and CE. Considering this property, we theoretically expect the waterfall curve of the SFC-LDPC ensemble to be steeper than that of the SC-LDPC ensemble, and we confirm this by numerical experiments.

  • High Level Congestion Detection from C/C++ Source Code for High Level Synthesis Open Access

    Masato TATSUOKA  Mineo KANEKO  

     
    PAPER

      Vol:
    E103-A No:12
      Page(s):
    1437-1446

    High-level synthesis (HLS) is a source-code-driven register-transfer-level (RTL) design tool, and the performance, power consumption, and area of the generated RTL are limited in part by the quality of the HLS input source code. To break through this limitation and obtain a further optimized RTL, optimization of the input source code is indispensable. Routing congestion is one such problem that calls for refinement of the HLS input source code. In this paper, we propose a novel HLS flow that improves the code by detecting congested parts directly from the HLS input source code, without running physical logic synthesis, and regenerating the input source code for HLS. In our approach, the origin of wire congestion is detected by applying pattern matching to the program dependence graph (PDG) constructed from the HLS input source code, and the possibility of wire congestion is reported.

  • A Study on Optimal Design of Optical Devices Utilizing Coupled Mode Theory and Machine Learning

    Koji KUDO  Keita MORIMOTO  Akito IGUCHI  Yasuhide TSUJI  

     
    PAPER

      Publicized:
    2020/03/25
      Vol:
    E103-C No:11
      Page(s):
    552-559

    We propose a new design approach that improves the computational efficiency of the optimal design of optical waveguide devices by combining coupled mode theory (CMT) with a neural network (NN). Recently, NNs have begun to be used for the efficient optimal design of optical devices. In this paper, the eigenmode analysis required by the CMT is skipped by using the NN, so that optimization with an evolutionary algorithm can be carried out efficiently. To verify the usefulness of our approach, optimal design examples of a wavelength-insensitive 3dB coupler, a 1:2 power splitter, and a wavelength demultiplexer are shown, and the transmission properties obtained by the CMT with the NN (NN-CMT) are verified by comparison with those calculated by a finite element beam propagation method (FE-BPM).

  • Design for Long-Reach Coexisting PON Considering Subscriber Distribution with Wavelength Selective Asymmetrical Splitters

    Kazutaka HARA  Atsuko KAWAKITA  Yasutaka KIMURA  Yasuhiro SUZUKI  Satoshi IKEDA  Kohji TSUJI  

     
    PAPER

      Publicized:
    2020/06/08
      Vol:
    E103-B No:11
      Page(s):
    1249-1256

    A long-reach coexisting PON system (1G/10G-EPON, video, and TWDM-PON) that uses a Wavelength Selective Asymmetrical optical SPlitter (WS-ASP) without any active devices such as optical amplifiers is proposed. The proposal can take the subscriber distribution in an access network into account and provide specific services in specific areas by varying the splitting ratios and the branch structure of the optical splitter. Simulations confirm the key features of WS-ASP: its novel process for deriving the splitting ratios and a greater transmission distance than is possible with symmetrical splitters. Experiments on a prototype system demonstrate how wavelengths can be assigned to specific areas and how the optical link budget is enhanced. For 1G-EPON systems, the prototype with a splitting ratio of 60% attains an optical link budget enhancement of 4.2dB compared with conventional symmetrical optical splitters. The same prototype offers an optical link budget enhancement of 4.0dB at the 10G-EPON bit rate. The values measured in the experiments agree well with the simulation results with respect to transmission distance.

