Author Search Result

[Author] Yu XIANG (3 hits)

1-3 of 3 hits
  • Construction of Frequency-Hopping/Time-Spreading Two-Dimensional Optical Codes Using Quadratic and Cubic Congruence Code

    Chongfu ZHANG  Kun QIU  Yu XIANG  Hua XIAO  

     
    PAPER-Fundamental Theories for Communications

      Vol: E94-B No:7
      Page(s): 1883-1891

    Quadratic congruence code (QCC)-based frequency-hopping and time-spreading (FH/TS) optical orthogonal codes (OOCs), and the corresponding expanded cardinality, were recently studied to improve data throughput and code capacity. In this paper, we propose a new FH/TS two-dimensional (2-D) code that uses the QCC and the cubic congruence code (CCC), named the QCC/CCC 2-D code. The expanded CCC-based 2-D codes are also considered. In contrast to the conventional QCC-based 1-D and QCC-based FH/TS 2-D optical codes, our analysis indicates that the code capacity of the CCC-based 1-D and CCC-based FH/TS 2-D codes can be improved with the same code weight and length, respectively.
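
    A minimal, illustrative sketch of how congruence-based hop sequences can be generated, assuming the common definitions y_k = a·k(k+1)/2 mod p for the QCC and y_k = a·k^3 mod p for the CCC over a prime p (which may differ from the exact construction in the paper); each sequence assigns a frequency row to every time chip of a p × p FH/TS codeword.

```python
# Sketch only: congruence-based hop sequences over a prime p.
# QCC is assumed to follow the form y_k = a*k*(k+1)/2 mod p and
# CCC the form y_k = a*k**3 mod p; both are assumptions, not the
# paper's exact construction.

def qcc_sequence(a, p):
    """Quadratic congruence sequence of length p for multiplier a (1 <= a < p)."""
    return [(a * k * (k + 1) // 2) % p for k in range(p)]

def ccc_sequence(a, p):
    """Cubic congruence sequence of length p for multiplier a (1 <= a < p)."""
    return [(a * k ** 3) % p for k in range(p)]

def to_2d_codeword(hops, p):
    """Map a hop sequence to a p x p 0/1 matrix: rows = frequencies, columns = time chips."""
    word = [[0] * p for _ in range(p)]
    for chip, freq in enumerate(hops):
        word[freq][chip] = 1
    return word

if __name__ == "__main__":
    p = 7
    print("QCC, a=2:", qcc_sequence(2, p))
    print("CCC, a=2:", ccc_sequence(2, p))
```

    Varying the multiplier a over 1, ..., p-1 yields a family of p-1 sequences for each construction in this sketch.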

  • A Unified Neural Network for Quality Estimation of Machine Translation

    Maoxi LI  Qingyu XIANG  Zhiming CHEN  Mingwen WANG  

     
    LETTER-Natural Language Processing

      Publicized: 2018/06/18
      Vol: E101-D No:9
      Page(s): 2417-2421

    The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for the sentence-level QE task, called the estimator. We propose to combine the two sub-networks into a single neural network, called the unified neural network. During training, the bidirectional RNN encoder-decoder is initialized and pre-trained on a bilingual parallel corpus, and then the whole network is trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor-estimator approach, the unified neural network helps to learn parameters that are more suitable for the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor-estimator approach and significantly outperforms the other baseline QE approaches.
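
    A minimal sketch of the unified predictor-estimator idea (module sizes, names, and the simplified predictor below are hypothetical, not the authors' implementation): both sub-networks live in one module so that a single L1 (mean absolute error) objective updates all parameters jointly.

```python
# Sketch only: a simplified predictor (bidirectional RNN encoder-decoder) and
# an estimator RNN wrapped in one nn.Module so that one MAE objective trains
# both jointly. Hyper-parameters and the context mechanism are placeholders.
import torch
import torch.nn as nn

class UnifiedQE(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        # Predictor: bidirectional encoder over the source, decoder over the target.
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(emb + 2 * hid, hid, batch_first=True)
        # Estimator: RNN over the predictor's target-side features -> one score.
        self.estimator = nn.GRU(hid, hid, batch_first=True)
        self.score = nn.Linear(hid, 1)

    def forward(self, src, tgt):
        enc_out, _ = self.encoder(self.src_emb(src))            # (B, S, 2*hid)
        ctx = enc_out.mean(dim=1, keepdim=True)                 # crude global context
        dec_in = torch.cat([self.tgt_emb(tgt),
                            ctx.expand(-1, tgt.size(1), -1)], dim=-1)
        feats, _ = self.decoder(dec_in)                          # (B, T, hid)
        _, h = self.estimator(feats)                             # (1, B, hid)
        return self.score(h[-1]).squeeze(-1)                     # sentence-level score

# Joint training step with an L1 (mean absolute error) objective on toy data;
# in the paper the predictor is first pre-trained on a bilingual parallel corpus.
model = UnifiedQE(src_vocab=1000, tgt_vocab=1000)
optim = torch.optim.Adam(model.parameters())
loss_fn = nn.L1Loss()
src = torch.randint(0, 1000, (4, 12))    # toy batch: 4 source sentences
tgt = torch.randint(0, 1000, (4, 10))    # their machine translations
gold = torch.rand(4)                      # gold sentence-level QE scores
loss = loss_fn(model(src, tgt), gold)
loss.backward()
optim.step()
```

    The point of the single module is that backpropagating the QE loss reaches the predictor's parameters as well, rather than freezing them after separate training.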

  • Multi-Resolution State Roadmap Method for Trajectory Planning

    Yuichi TAZAKI  Jingyu XIANG  Tatsuya SUZUKI  Blaine LEVEDAHL  

     
    PAPER-Mathematical Systems Science

      Vol: E99-A No:5
      Page(s): 954-962

    This research develops a method for trajectory planning of robotic systems with differential constraints based on hierarchical partitioning of a continuous state space. Unlike conventional roadmaps, which are constructed in the configuration space, the proposed state roadmap also includes additional state information, such as velocity and orientation. A bounded domain of the additional state is partitioned into sub-intervals at multiple resolution levels. Each node of a state roadmap consists of a fixed position and an interval of additional state values. A valid transition is defined between a pair of nodes if every combination of additional states, within their respective intervals, produces a trajectory that satisfies a set of safety constraints. In this manner, a trajectory connecting arbitrary start and goal states subject to safety constraints can be obtained by applying a graph search technique to the state roadmap. The hierarchical nature of the state roadmap reduces the computational cost of roadmap construction, the storage size of computed roadmaps, and the computational cost of path planning. The state roadmap method is evaluated in trajectory planning examples of an omni-directional mobile robot and a car-like robot with collision avoidance and various types of constraints.
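
    A minimal sketch of the interval-node idea (the state variable, safety test, and interfaces below are simplified placeholders, not the paper's construction): a node pairs a fixed position with an interval of an additional state, an edge is accepted only if every sampled combination of states from the two intervals passes the safety check, and a plan is then found by graph search over the accepted edges.

```python
# Sketch only: interval-valued roadmap nodes, edge validation over sampled
# state combinations, and BFS over the validated edges. The "additional state"
# here is a heading angle and the safety test is a toy bound on heading change.
import math
from collections import deque
from itertools import product

class Node:
    def __init__(self, pos, theta_lo, theta_hi):
        self.pos = pos                        # fixed (x, y) position
        self.interval = (theta_lo, theta_hi)  # interval of the additional state

def samples(interval, n=5):
    lo, hi = interval
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

def motion_is_safe(p, q, theta_p, theta_q, max_turn=math.pi / 3):
    """Toy safety test: heading change along the straight edge stays within a bound."""
    heading = math.atan2(q[1] - p[1], q[0] - p[0])
    return (abs(heading - theta_p) <= max_turn and
            abs(theta_q - heading) <= max_turn)

def valid_edge(a, b):
    """Accept the edge only if ALL sampled state combinations are safe."""
    return all(motion_is_safe(a.pos, b.pos, ta, tb)
               for ta, tb in product(samples(a.interval), samples(b.interval)))

def plan(nodes, edges, start, goal):
    """BFS over validated edges; returns a list of node indices or None."""
    adj = {i: [] for i in range(len(nodes))}
    for i, j in edges:
        if valid_edge(nodes[i], nodes[j]):
            adj[i].append(j)
    parent, queue = {start: None}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None
```

    In this simplified picture, the multi-resolution aspect of the paper would correspond to splitting an interval into finer sub-intervals and re-checking edges only where the coarse interval fails, which is what keeps construction, storage, and planning costs low.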
