Author Search Result

[Author] Takafumi HIKICHI (2 results)

  • On a Blind Speech Dereverberation Algorithm Using Multi-Channel Linear Prediction

    Marc DELCROIX  Takafumi HIKICHI  Masato MIYOSHI  

     
    PAPER-Engineering Acoustics

    Vol: E89-A No:10
    Page(s): 2837-2846

    It is well known that speech captured by distant microphones in a room suffers from distortions caused by reverberation. These distortions may seriously damage both the speech characteristics and intelligibility, and are consequently harmful to many speech applications. To solve this problem, we previously proposed a dereverberation algorithm based on multi-channel linear prediction. The method works as follows. First, we calculate prediction filters that cancel the room reverberation but also degrade the speech characteristics by whitening the speech excessively. Then, we evaluate the degradation caused by the prediction filters and compensate for the excessive whitening. As the reverberation becomes longer, the compensation performance deteriorates owing to numerical accuracy problems. In this paper, we propose a new computation method that may improve the compensation accuracy when dealing with long reverberation.
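
    The two-step structure described in this abstract (multi-channel linear prediction followed by compensation of the excessive whitening) can be illustrated with a small numerical sketch. The Python outline below is only a rough illustration under assumed settings: the filter order, the prediction delay, and the LPC-based re-shaping used here as a stand-in for the paper's compensation step are not taken from the paper.

        # Illustrative sketch of dereverberation by multi-channel linear
        # prediction. Not the authors' algorithm: filter order, delay, and
        # the compensation step below are assumptions for clarity.
        import numpy as np
        from scipy.signal import lfilter

        def mclp_dereverb(x, order=30, delay=2):
            """x: (n_channels, n_samples) reverberant observations.
            Returns the prediction residual of channel 0, i.e. a
            dereverberated but excessively whitened signal."""
            n_ch, n = x.shape
            cols = []
            for c in range(n_ch):
                for k in range(order):
                    shifted = np.zeros(n)
                    shifted[delay + k:] = x[c, :n - delay - k]
                    cols.append(shifted)
            A = np.stack(cols, axis=1)          # delayed multi-channel regressors
            target = x[0]
            # Least-squares prediction filters that cancel late reverberation.
            w, *_ = np.linalg.lstsq(A, target, rcond=None)
            residual = target - A @ w           # dereverberated, over-whitened
            return residual, w

        def compensate_whitening(residual, lpc_order=16):
            """Hypothetical compensation: re-impose a short-term spectral
            envelope estimated from the residual itself via LPC (an
            illustrative stand-in, not the paper's method)."""
            r = np.correlate(residual, residual, mode="full")[len(residual) - 1:]
            R = np.array([[r[abs(i - j)] for j in range(lpc_order)]
                          for i in range(lpc_order)])
            a = np.linalg.solve(R, r[1:lpc_order + 1])
            # Shape the residual with the all-pole envelope 1 / (1 - sum a_k z^-k).
            return lfilter([1.0], np.concatenate(([1.0], -a)), residual)

    In the paper itself, the compensation is derived from the prediction filters rather than from the residual, and it is that computation whose accuracy degrades as the reverberation lengthens, which is the problem the proposed method addresses.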

  • Common Acoustical Pole Estimation from Multi-Channel Musical Audio Signals

    Takuya YOSHIOKA  Takafumi HIKICHI  Masato MIYOSHI  Hiroshi G. OKUNO  

     
    PAPER-Engineering Acoustics

    Vol: E89-A No:1
    Page(s): 240-247

    This paper describes a method for estimating the amplitude characteristics of poles common to multiple room transfer functions from musical audio signals received by multiple microphones. Knowledge of these pole characteristics would make audio equalizers easier to adjust, since the poles correspond to the room resonances. It has been proven that the poles can be estimated precisely when the source signal is white. However, when the source signal is colored, as in the case of a musical audio signal, the estimate is degraded by the frequency characteristics originally contained in the source. In this paper, we consider the amplitude spectrum of a musical audio signal to consist of a spectral envelope and a fine structure. We assume that musical pieces can be classified into several categories according to their average amplitude spectral envelopes. Under this assumption, the amplitude spectral envelope of the musical audio signal can be obtained from prior knowledge of the average amplitude spectral envelope of the category into which the target piece is classified. The fine structure, on the other hand, is identified based on its variation over time. By removing both the spectral envelope and the fine structure from the amplitude spectrum estimated with the conventional method, the amplitude characteristics of the acoustical poles can be extracted. Simulation results for 20 popular songs revealed that our method was capable of estimating the amplitude characteristics of the acoustical poles with a spectral distortion of 3.11 dB. In particular, most of the spectral peaks, which correspond to the room resonance modes, were successfully detected.
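
    As a rough numerical illustration of the removal step described above, the sketch below subtracts a category-average envelope from a time- and microphone-averaged log spectrum; the time averaging stands in for the variance-based identification of the fine structure. The frame layout, the averaging, and the distortion measure are illustrative assumptions, not the paper's exact procedure.

        # Illustrative sketch: isolate the common-pole amplitude response by
        # removing the source envelope (category prior) from an averaged
        # log-magnitude spectrum. Assumptions, not the paper's procedure.
        import numpy as np

        def common_pole_amplitude(mic_frames, category_envelope, eps=1e-12):
            """mic_frames: (n_mics, n_frames, n_bins) magnitude spectra of the
            received musical signal; category_envelope: (n_bins,) average
            amplitude spectral envelope of the piece's assumed category."""
            log_spec = np.log(mic_frames + eps)
            # Averaging over time suppresses the time-varying fine structure
            # (harmonics changing from note to note); averaging over microphones
            # emphasizes what is common to all room transfer functions.
            avg = log_spec.mean(axis=(0, 1))
            # Remove the source colouring using the category's average envelope;
            # the remainder is attributed to the common acoustical poles.
            return np.exp(avg - np.log(category_envelope + eps))

        def spectral_distortion(est, ref):
            """RMS log-spectral distortion in dB between two amplitude responses
            (one common way to define the figure reported in the abstract)."""
            d = 20.0 * (np.log10(est) - np.log10(ref))
            return float(np.sqrt(np.mean(d ** 2)))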
