Keyword Search Result

[Keyword] total variation (11 hits)

1-11 hits
  • Single Image Dehazing Based on Weighted Variational Regularized Model

    Hao ZHOU  Hailing XIONG  Chuan LI  Weiwei JIANG  Kezhong LU  Nian CHEN  Yun LIU  

     
    PAPER-Artificial Intelligence, Data Mining

  Publicized:
    2021/04/06
      Vol:
    E104-D No:7
      Page(s):
    961-969

Image dehazing is of great significance in computer vision and other fields. Dehazing performance relies mainly on the precise computation of the transmission map. However, existing transmission estimates still do not work well in sky areas and are easily influenced by noise. Hence, in this work the dark channel prior (DCP) and a luminance model are used to estimate the coarse transmission, which addresses the problem of transmission estimation in the sky area. A novel weighted variational regularization model is then proposed to refine the transmission. Specifically, the proposed model simultaneously refines the transmission and restores a clear, haze-free image. More importantly, it preserves important image details and suppresses image noise during dehazing. In addition, a new Gaussian adaptive weighting function is defined to smooth contextual areas while preserving depth-discontinuity edges. Experiments on real-world and synthetic images show that our method is competitive with state-of-the-art algorithms in different hazy environments.
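The coarse transmission step described in this abstract follows the standard dark channel prior formulation. A minimal NumPy sketch, not the authors' code (function names are assumptions, and the luminance-model correction for sky regions is omitted):

```python
import numpy as np

def dark_channel(image, patch=7):
    # Dark channel prior: per-pixel minimum over color channels,
    # then a local minimum over a square patch around each pixel.
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def coarse_transmission(image, atmosphere, omega=0.95, patch=7):
    # Standard DCP estimate: t(x) = 1 - omega * dark_channel(I / A),
    # where A is the atmospheric light per color channel.
    return 1.0 - omega * dark_channel(image / atmosphere, patch)
```

In the paper this coarse estimate is only the starting point; the weighted variational model then refines it jointly with the restored image.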

  • Analysis on Wave-Velocity Inverse Imaging for the Supporting Layer in Ballastless Track

    Yong YANG  Junwei LU  Baoxian WANG  Weigang ZHAO  

     
    LETTER-Fundamentals of Information Systems

  Publicized:
    2020/04/08
      Vol:
    E103-D No:7
      Page(s):
    1760-1764

The concrete quality of the supporting layer in ballastless track is important for the safe operation of a high-speed railway (HSR). However, the supporting layer is covered by the upper track slab and the functional layer, making it difficult to detect concealed defects inside it. To solve this problem, an elastic-wave velocity-imaging method is proposed to analyze the concrete quality. First, the propagation path of the elastic wave in the supporting layer is analyzed, and a head-wave arrival-time (HWAT) extraction method based on wavelet spectrum correlation analysis (WSCA) is proposed. Then, a grid model is established to relate the grid wave velocity, travel route, and travel time. A loss function based on total variation is constructed, and an inverse method is applied to estimate the elastic wave velocity in the supporting layer. Finally, simulation and field experiments are conducted to verify the suppression of noise signals and the accuracy of the inverse imaging for elastic-wave velocity estimation. The results show that the WSCA analysis can extract the HWAT efficiently, and the inverse imaging method can accurately estimate the wave velocity in the supporting layer.
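The TV-regularized loss described here can be sketched as a travel-time misfit plus a total-variation penalty on the grid. All names below are hypothetical and the ray model is simplified (straight rays, a linear slowness-to-time map); the paper's actual formulation may differ:

```python
import numpy as np

def total_variation(field):
    # Anisotropic total variation of a 2-D grid: sum of absolute
    # differences between vertically and horizontally adjacent cells.
    return (np.abs(np.diff(field, axis=0)).sum()
            + np.abs(np.diff(field, axis=1)).sum())

def inversion_loss(slowness, path_lengths, travel_times, lam=0.1):
    # Travel-time misfit plus TV regularization on the slowness grid
    # (slowness = 1 / wave velocity). Each row of `path_lengths` holds
    # the length of one ray inside each grid cell, so predicted travel
    # time is a linear function of the slowness values.
    residual = path_lengths @ slowness.ravel() - travel_times
    return 0.5 * (residual ** 2).sum() + lam * total_variation(slowness)
```

Minimizing this loss over the slowness grid yields the velocity image; the TV term suppresses noise-driven oscillations while allowing sharp velocity jumps at defects.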

  • Blind Image Deblurring Using Weighted Sum of Gaussian Kernels for Point Spread Function Estimation

    Hong LIU  BenYong LIU  

     
    LETTER-Image Processing and Video Processing

  Publicized:
    2015/08/05
      Vol:
    E98-D No:11
      Page(s):
    2026-2029

Point spread function (PSF) estimation plays a paramount role in image deblurring, and traditionally it is solved by parameter estimation under a preassumed PSF shape model. In practice, however, the PSF shape is generally arbitrary and complicated. We therefore assume in this manuscript that a PSF can be decomposed as a weighted sum of a certain number of Gaussian kernels, with the weight coefficients estimated in an alternating manner, and an l1-norm-based total variation (TVl1) algorithm is adopted to recover the latent image. Experiments show that the proposed method achieves satisfactory performance on synthetic and realistic blurred images.
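The PSF decomposition assumed in this letter can be illustrated with a small sketch (hypothetical names; the letter additionally estimates the weights by alternating optimization, which is not shown here):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized 2-D isotropic Gaussian kernel on a size x size grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def psf_from_gaussians(size, sigmas, weights):
    # PSF modeled as a normalized weighted sum of Gaussian kernels,
    # so arbitrary smooth blur shapes can be approximated.
    psf = sum(w * gaussian_kernel(size, s) for w, s in zip(weights, sigmas))
    return psf / psf.sum()
```

Fixing the Gaussian basis and estimating only the weights turns PSF estimation into a low-dimensional problem, which is the point of the decomposition.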

  • Robust Segmentation of Highly Dynamic Scene with Missing Data

    Yinhui ZHANG  Zifen HE  Changyu LIU  

     
    LETTER-Pattern Recognition

  Publicized:
    2014/09/29
      Vol:
    E98-D No:1
      Page(s):
    201-205

Segmenting foreground objects from highly dynamic scenes with missing data is very challenging. We present a novel unsupervised segmentation approach that can cope with extensive scene dynamics as well as a substantial amount of missing data. To make this possible, we first exploit convex optimization of total variation on images with missing data for which a depletion mask is available. Inpainting depleted images using total variation facilitates detecting ambiguous objects in highly dynamic images, because it tends to yield object-instance areas with improved grayscale contrast. We then use a conditional random field that integrates both appearance and motion knowledge of the foreground objects. Our approach segments foreground object instances while inpainting the highly dynamic scene with varying amounts of missing data in a coupled way. We demonstrate this on a very challenging dataset from the UCSD Highly Dynamic Scene Benchmarks (HDSB), compare our method with two state-of-the-art unsupervised image-sequence segmentation algorithms, and provide quantitative and qualitative performance comparisons.
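The TV-inpainting step can be sketched as gradient descent on a smoothed total-variation energy, with the known pixels re-imposed each iteration. This is an illustrative toy solver with assumed names, not the authors' convex-optimization method (for which faster dedicated algorithms exist):

```python
import numpy as np

def tv_inpaint(image, known_mask, steps=200, step_size=0.1, eps=1e-6):
    # Descend on a smoothed TV energy; pixels where known_mask is True
    # are anchored to their observed values after every step.
    x = image.copy()
    for _ in range(steps):
        # forward differences (last row/column replicated)
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # divergence of the normalized gradient (backward differences);
        # this is minus the subgradient of the smoothed TV energy
        nx, ny = gx / mag, gy / mag
        div = (np.diff(nx, axis=1, prepend=nx[:, :1] * 0)
               + np.diff(ny, axis=0, prepend=ny[:1, :] * 0))
        x = x + step_size * div
        x[known_mask] = image[known_mask]  # keep known data fixed
    return x
```

The missing region is filled by diffusion that preserves strong edges, which is what improves the grayscale contrast of object areas mentioned above.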

  • Spatially Adaptive Logarithmic Total Variation Model for Varying Light Face Recognition

    Biao WANG  Weifeng LI  Zhimin LI  Qingmin LIAO  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E96-D No:1
      Page(s):
    155-158

In this letter, we propose an extension to the classical logarithmic total variation (LTV) model for face recognition under varying illumination conditions. LTV treats all facial areas with the same regularization parameters, which inevitably loses useful facial details and harms recognition. To address this problem, we propose to assign the regularization parameters that balance the large-scale (illumination) and small-scale (reflectance) components in a spatially adaptive scheme. Face recognition experiments on both the Extended Yale B and the large-scale FERET databases demonstrate the effectiveness of the proposed method.

  • Colorization Based Image Coding by Using Local Correlation between Luminance and Chrominance

    Yoshitaka INOUE  Takamichi MIYATA  Yoshinori SAKAI  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E95-D No:1
      Page(s):
    247-255

Recently, a novel approach to color image compression based on colorization has been presented. The conventional colorization-based image coding method tends to lose the local oscillations of chrominance components present in the original images, and a large number of color assignments is required to restore them. On the other hand, previous studies suggest that the oscillation of a chrominance component correlates with that of the corresponding luminance component. In this paper, we propose a new colorization-based image coding method that utilizes the local correlation between the texture components of luminance and chrominance. These texture components are obtained by a total variation regularized energy minimization method. The local correlations are approximated by linear functions whose coefficients are extracted by an optimization method. This key idea enables us to represent the oscillations of chrominance components with only a few pieces of information. Experimental results showed that our method can restore the local oscillations and code images more efficiently than the conventional method, JPEG, or JPEG2000 at high compression rates.
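The linear approximation of the luminance-chrominance correlation can be sketched as a per-region least-squares fit. A simplified, hypothetical sketch (the paper extracts the coefficients by its own optimization method, and the TV texture decomposition is assumed done beforehand):

```python
import numpy as np

def fit_local_correlation(lum_texture, chroma_texture):
    # Approximate the chrominance texture as a linear function of the
    # luminance texture, chroma ~ a * lum + b, via least squares.
    # The pair (a, b) is the small amount of side information a coder
    # would transmit per region instead of the oscillations themselves.
    A = np.stack([lum_texture.ravel(),
                  np.ones(lum_texture.size)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, chroma_texture.ravel(), rcond=None)
    return coeffs  # (a, b)
```

At the decoder, the chrominance oscillations are regenerated from the decoded luminance texture and the transmitted coefficients, which is why so few color assignments are needed.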

  • Maxima Exploitation for Reference Blurring Function in Motion Deconvolution

    Rachel Mabanag CHONG  Toshihisa TANAKA  

     
    PAPER-Digital Signal Processing

      Vol:
    E94-A No:3
      Page(s):
    921-928

The actual blurring function, or point spread function (PSF), of an image is in most cases similar to a parametric or ideal model. Recently proposed blind deconvolution methods exploit this idea for learning during PSF estimation. However, their dependence on the estimated values may result in ineffective learning when the model is erroneously selected. To overcome this problem, we propose to exploit the image maxima in order to extract a reference point spread function (RPSF). The RPSF depends only on the degraded image and has a structure that closely resembles a parametric motion blur, assuming a known blur support size. Its use yields a more stable learning and estimation process, since it does not change with respect to iteration or any estimated value. We define a cost function in vector-matrix form that accounts for the blurring-function contour as well as learning towards the RPSF. The effectiveness of the RPSF and the proposed cost function under various motion directions and support sizes is demonstrated by experimental results.

  • Image Quality Enhancement for Single-Image Super Resolution Based on Local Similarities and Support Vector Regression

    Atsushi YAGUCHI  Tadaaki HOSAKA  Takayuki HAMAMOTO  

     
    LETTER-Processing

      Vol:
    E94-A No:2
      Page(s):
    552-554

In reconstruction-based super resolution, a high-resolution image is estimated using multiple low-resolution images with sub-pixel misalignments. Therefore, when only one low-resolution image is available, it is generally difficult to obtain a favorable image. This letter proposes a method for overcoming this difficulty in single-image super resolution. In our method, pixel values at sub-pixel locations are first interpolated on a patch-by-patch basis by support vector regression, with learning samples collected within the given image based on local similarities; we then solve the regularized reconstruction problem with a sufficient number of constraints. Evaluation experiments were performed on artificial and natural images, and the obtained high-resolution images restore high-frequency components favorably along with improved PSNRs.

  • Image-Processing Approach Based on Nonlinear Image-Decomposition

    Takahiro SAITO  Takashi KOMATSU  

     
    INVITED PAPER

      Vol:
    E92-A No:3
      Page(s):
    696-707

Decomposing an input image into intuitively convincing image components, such as a structure component and a texture component, is a very important and intriguing problem in digital image processing, and an inherently nonlinear one. Recently, several numerical schemes for solving this nonlinear image-decomposition problem have been proposed. Using nonlinear image decomposition as a pre-process for various image-processing tasks may pave the way to solving difficult problems left open by the classic approach of digital image processing. Since this new approach treats each separated component with a processing method suited to it, it can attain targets that seem contrary to each other, for instance invisibility of ringing artifacts together with sharpness of edges and textures, which have not been attained simultaneously by the classic approach. This paper reviews recently developed state-of-the-art schemes for nonlinear image decomposition and introduces some examples of the decomposition-and-processing approach.

  • Kernel TV-Based Quotient Image Employing Gabor Analysis and Its Application to Face Recognition

    GaoYun AN  JiYing WU  QiuQi RUAN  

     
    LETTER-Pattern Recognition

      Vol:
    E91-D No:5
      Page(s):
    1573-1576

In order to overcome the drawback of TVQI and to exploit dimensionality-increasing techniques, a novel Kernel TV-based Quotient Image model employing Gabor analysis is proposed and applied to face recognition with only one sample per subject. To deal with illumination outliers, an enhanced TV-based quotient image (ETVQI) model is first adopted. Then, for images preprocessed by ETVQI, a bank of Gabor filters is built to extract features at specified scales and orientations. Lastly, KPCA is introduced to extract final high-order, nonlinear features from the extracted Gabor features. In experiments on the CAS-PEAL face database, our model outperforms Gabor-based KPCA, TVQI, and Gabor-based TVQI under most outlier conditions (illumination, expression, masking, etc.).

  • An Edge-Preserving Super-Precision for Simultaneous Enhancement of Spacial and Grayscale Resolutions

    Hiroshi HASEGAWA  Toshinori OHTSUKA  Isao YAMADA  Kohichi SAKANIWA  

     
    PAPER-Image

      Vol:
    E91-A No:2
      Page(s):
    673-681

In this paper, we propose a method that recovers a smooth high-resolution image from several blurred and roughly quantized low-resolution images. To compensate for the quantization effect, we introduce two measures of smoothness: the Huber function, originally used to suppress block noise in JPEG-compressed images [Schultz & Stevenson '94], and a smoothed version of total variation. Using a simple operator that approximates the convex projection onto the constraint set defined for each quantized image [Hasegawa et al. '05], we propose a method that minimizes these smooth convex cost functions over the intersection of all constraint sets, i.e., the set of all images satisfying all quantization constraints simultaneously, by the hybrid steepest descent method [Yamada & Ogura '04]. Finally, in a numerical example we compare images obtained by the proposed method, a conventional Projections Onto Convex Sets (POCS) based method, and a generalized version of the proposed method that minimizes the energy of the Laplacian output.
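The Huber function used as a smoothness measure above has a simple closed form: quadratic near zero, linear in the tails. A minimal sketch (the function itself is standard; its use inside the hybrid steepest descent scheme is not shown):

```python
import numpy as np

def huber(t, delta=1.0):
    # Huber function: 0.5 * t^2 for |t| <= delta, and
    # delta * (|t| - 0.5 * delta) beyond. A differentiable,
    # edge-preserving surrogate for the absolute value used in
    # total variation, so it penalizes large jumps only linearly.
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))
```

Applied to pixel differences, the quadratic part smooths quantization noise while the linear part avoids over-penalizing genuine edges.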
