Keyword Search Result

[Keyword] image(1441hit)

1421-1440hit(1441hit)

  • Integration of Color and Range Data for Three-Dimensional Scene Description

    Akira OKAMOTO  Yoshiaki SHIRAI  Minoru ASADA  

     
    PAPER

      Vol:
    E76-D No:4
      Page(s):
    501-506

    This paper describes a method for describing a three-dimensional (3-D) scene by integrating color and range data. Range data is obtained by a feature-based stereo method developed in our laboratory. A color image is segmented into uniform color regions, and a plane is fitted to the range data inside each segmented region. Regions are classified into three types based on the range data. Certain types of regions are merged, while the others remain unless their region type is modified. The region type is modified if the range data on a plane are selected by removing some of the range data. As a result, the scene is represented by planar surfaces with homogeneous colors. Experimental results for real scenes are shown.
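The plane-fitting step the abstract mentions can be sketched as an ordinary least-squares fit of z = ax + by + c to the range samples inside one color-segmented region. This is a minimal illustration, not the authors' implementation; the function name and the RMS-residual return value are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D range samples.

    points: (N, 3) array of (x, y, z) range data inside one
    color-segmented region. Returns (a, b, c) and the RMS residual,
    which could serve to classify how planar the region is.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, np.sqrt(np.mean(residuals ** 2))
```

A small residual indicates the region is well modeled by a single planar surface; large residuals would trigger the reclassification or data-removal step described above.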

  • Adaptive Restoration of Degraded Binary MRF Images Using EM Method

    Tatsuya YAMAZAKI  Mehdi N.SHIRAZI  Hideki NODA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:2
      Page(s):
    259-268

    An adaptive restoration algorithm is developed for binary images degraded nonadditively with flip noises. The true image is assumed to be a realization of a Markov Random Field (MRF) and the nonadditive flip noises are assumed to be statistically independent and asymmetric. Using the Expectation and Maximization (EM) method and approximating Baum's auxiliary function, the degraded image is restored iteratively. The algorithm is implemented as follows. First, the unknown parameters and the true image are guessed or estimated roughly. Second, using the true image estimate, Baum's auxiliary function is approximated and then the noise and MRF parameters are reestimated. To reestimate the MRF parameters, the Maximum Pseudo-likelihood (MPL) method is used. Third, using the Iterated Conditional Modes (ICM) method, the true image is reestimated. The second and third steps are carried out iteratively until, by some ad hoc criterion, a critical point of the EM algorithm is approximated. A number of simulation examples are presented which show the effectiveness of the algorithm and the parameter estimation procedures.
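The ICM step in the third stage can be sketched as follows: each pixel is set to the label that maximizes a local posterior combining a flip-noise likelihood with an Ising-type MRF prior. This is a simplified sketch with a single symmetric flip probability (the paper treats asymmetric noise) and an illustrative smoothness weight `beta`; it is not the authors' code.

```python
import numpy as np

def icm_restore(observed, beta, p_flip, iters=5):
    """ICM restoration of a binary (0/1) image degraded by flip noise.

    observed : 2-D array of 0/1 pixels.
    beta     : MRF smoothness weight (illustrative value).
    p_flip   : assumed flip probability (asymmetric noise would use two).
    """
    x = observed.copy()
    # log P(observed | true): rows indexed by true label, cols by observed.
    log_like = np.log([[1 - p_flip, p_flip], [p_flip, 1 - p_flip]])
    H, W = x.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                nbrs = []
                if i > 0: nbrs.append(x[i - 1, j])
                if i < H - 1: nbrs.append(x[i + 1, j])
                if j > 0: nbrs.append(x[i, j - 1])
                if j < W - 1: nbrs.append(x[i, j + 1])
                best, best_e = x[i, j], -np.inf
                for label in (0, 1):
                    # likelihood term + prior term rewarding agreement
                    e = log_like[label][observed[i, j]] \
                        + beta * sum(int(n == label) for n in nbrs)
                    if e > best_e:
                        best, best_e = label, e
                x[i, j] = best
    return x
```

In the full algorithm this pixelwise maximization alternates with the EM/MPL reestimation of `beta` and the noise parameters.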

  • Fiber Optic Microwave Links Using Balanced/Image Canceling Photodiode Mixing

    Hideki KAMITSUNA  Hiroyo OGAWA  

     
    PAPER-Optical-Microwave Mixers

      Vol:
    E76-C No:2
      Page(s):
    264-270

    This paper proposes fiber optic link configurations for use in microwave and millimeter-wave transmission. Higher frequencies, such as millimeter-waves, are well suited to the transmission of broadband signals. Photodiodes can operate simultaneously as optical detectors and microwave frequency mixers thanks to their inherent nonlinearities. This allows us to increase the output radio frequency. However, this also generates undesired spurious frequencies, necessitating the use of microwave filters. We discuss here two fiber optic link configurations, i.e., balanced/image canceling photodiode mixing links, which combine microwave functional components and optical devices to suppress the local/image frequency without filters. These configurations are experimentally investigated at microwave frequencies, and local/image frequency suppression is successfully demonstrated.

  • Conversion of Image Resolutions for High Quality Visual Communication

    Saprangsit MRUETUSATORN  Hirotsugu KINOSHITA  Yoshinori SAKAI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:2
      Page(s):
    251-258

    This paper discusses the conversion of spatial resolution (pixel density) and amplitude resolution (levels of brightness) for multilevel images. A source image is sampled by an image scanner or a video camera, and a converted image is printed by a printer with higher spatial but lower amplitude resolution than the image input device. In the proposed method, the impulse response of the scanner sensor is modeled so that pixel values are obtained from the convolution of the impulse response and the image signal. Discontinuous areas (edges) of the original image are detected locally according to the impulse model and neighbouring pixel values. The edge route is then estimated, which gives the pixel values for the output resolutions. Comparison of the proposed method with two conventional methods, reciprocal distance weight interpolation and pixel replication, shows higher edge quality for the proposed method.

  • High-Definition Television (HDTV) Solid State Image Sensors

    Sohei MANABE  Nozomu HARADA  

     
    INVITED PAPER-LSI Technology for Opto-Electronics

      Vol:
    E76-C No:1
      Page(s):
    78-85

    High-Definition Television (HDTV) 2-million-pixel solid state image sensors with high performance have been realized for the 1-inch optical format. Key technical aspects of HDTV image sensors are suppression of the smear level by maintaining a large optical aperture, and a high readout signal rate achieved by introducing a dual-channel horizontal register. From this perspective, new HDTV image sensors such as the Stack CCD, the Frame-Interline Transfer (FIT) CCD, and the Charge Modulation Device (CMD) have been developed.

  • Matching of Edge-Line Images Using Relaxation

    Masao IZUMI  Takeshi ASANO  Kunio FUKUNAGA  Hideto MURATA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E75-D No:6
      Page(s):
    902-908

    In this paper, we propose a method for matching two images (stereo, motion stereo, etc.) using relaxation. We have already proposed an algebraic expression of line images using unit vectors, and a matching method based on a similarity measure between two image graphs. This similarity measure is insensitive to scaling, rotation, gray-level modification, and small motion between the two images in the case of image registration or image matching. The approach based on line structural similarity results in a high rate of correspondence between the nodes of the two images. To obtain a still higher rate of correspondence, we introduce a relaxation method when examining the degree of similarity between the two images. Our relaxation method improves the relational similarity of correspondence between the two image graphs in an iterative manner. The relational similarity is defined as the likelihood of correct correspondence between nodes, taking into consideration the connective relationships of the image graphs. Finally, we show several experimental results which confirm the effectiveness of our approach.

  • Image Restoration with Signal Dependent Noise and Blur

    Hiroshi KONDO  Yoshinobu MAKINO  Hidetoshi HIRAI  

     
    PAPER

      Vol:
    E75-A No:9
      Page(s):
    1110-1115

    A new image restoration filter based upon a physical model of image degradation is constructed. By means of this filter, signal dependent noise and blur can be suppressed. In particular, the image degradation noise can be modeled in generalized form. Noise suppression and deblurring are performed separately. When used in conjunction with the degradation model, the filter also applies to real photographic images and photoelectronic images. Simulation results show that this filter gives superior performance in restoring an image degraded by signal dependent noise and blur.

  • An Active Reflector for SAR Calibration Having a Frequency Shift Capability

    Masaharu FUJITA  

     
    LETTER-Radio Communication

      Vol:
    E75-B No:8
      Page(s):
    791-793

    This letter proposes an active reflector for calibrating a synthetic aperture radar (SAR), in which the frequency of a received SAR signal is shifted by a certain amount and then retransmitted to the SAR. The frequency shift causes a shift of the reflector SAR image in the azimuth direction relative to its background. This function makes it possible to enhance the signal-to-clutter ratio of the reflector image by moving it onto a radiometrically dark background, and hence would be of value for SAR calibration even in a narrow test site. The theory, design, and development are described briefly.

  • A 15 GFLOPS Parallel DSP System for Super High Definition Image Processing

    Tomoko SAWABE  Tetsurou FUJII  Hiroshi NAKADA  Naohisa OHTA  Sadayasu ONO  

     
    INVITED PAPER

      Vol:
    E75-A No:7
      Page(s):
    786-793

    This paper describes a super high definition (SHD) image processing system we have developed. The computing engine of this system is a parallel processing system with 128 processing elements called NOVI-HiPIPE. A new pipelined vector processor is introduced as a backend processor of each processing element in order to meet the great computing power required by SHD image processing. This pipelined vector processor can achieve 120 MFLOPS. The 128 pipelined vector processors installed in NOVI-HiPIPE yield a total system peak performance of 15 GFLOPS. The SHD image processing system consists of an SHD image scanner, an SHD image storage node, a full color printer, a film recorder, NOVI-HiPIPE, and a Super Frame Memory. The Super Frame Memory can display a full color moving image sequence at a rate of 60 fps on a CRT monitor at a resolution of 2048 by 2048 pixels. Workstations, interconnected through an Ethernet, are used to control these units, and SHD image data can be easily transferred among the units. NOVI-HiPIPE has a frame memory which can display SHD still images on a color monitor; therefore, one processed frame can be directly displayed. We are developing SHD image processing algorithms and parallel processing methodologies using this system.

  • Property of Circular Convolution for Subband Image Coding

    Hitoshi KIYA  Kiyoshi NISHIKAWA  Masahiko SAGAWA  

     
    PAPER-Image Coding and Compression

      Vol:
    E75-A No:7
      Page(s):
    852-860

    One of the problems with subband image coding is the increase in image size caused by filtering. To solve this, it has been proposed to transform the input sequence into a periodic one, so that filtering is implemented by circular convolution. Although this technique solves the problem, it imposes very strong restrictions, i.e., limitations on the filter type and on the filter bank structure. In this paper, a development of this technique is presented. Consequently, any type of linear-phase FIR filter and any filter bank structure can be used.
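The size-preserving property of circular convolution can be illustrated directly: filtering a length-N periodic sequence yields exactly N output samples, unlike linear convolution, which grows the sequence by the filter length. This is a generic sketch of the underlying operation, not the paper's filter-bank construction; the function name is an assumption.

```python
import numpy as np

def circular_filter(signal, taps):
    """Filter a length-N sequence by circular convolution with an FIR
    filter so that the output stays length N (no size growth)."""
    N = len(signal)
    h = np.zeros(N)
    h[:len(taps)] = taps
    # Circular convolution = pointwise product in the DFT domain.
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(h)))
```

In a subband coder, rows and columns of the image would each be treated as periodic sequences of this kind, so analysis and synthesis filtering never enlarge the subband images.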

  • On Quality Improvement of Reconstructed Images in Diffraction Tomography

    Haruyuki HARADA  Mitsuru TANAKA  Takashi TAKENAKA  

     
    LETTER

      Vol:
    E75-A No:7
      Page(s):
    910-913

    This letter discusses the quality improvement of reconstructed images in diffraction tomography. An efficient iterative procedure based on the modified Newton-Kantorovich method and the Gerchberg-Papoulis algorithm is presented. The simulated results demonstrate the property of high-quality reconstruction even for cases where the first-order Born approximation fails.

  • Lossless Image Compression by Two-Dimensional Linear Prediction with Variable Coefficients

    Nobutaka KUROKI  Takanori NOMURA  Masahiro TOMITA  Kotaro HIRANO  

     
    PAPER-Image Coding and Compression

      Vol:
    E75-A No:7
      Page(s):
    882-889

    A lossless image compression method based on two-dimensional (2D) linear prediction with variable coefficients is proposed. This method employs a space-varying autoregressive (AR) model. To achieve a higher compression ratio, the method introduces new ideas on three points: the level conversion, the fast recursive parameter estimation, and the switching method for the coding table. The level conversion prevents the AR model from predicting gray levels which do not exist in the image. The fast recursive parameter estimation algorithm proposed here calculates the varying coefficients of the linear prediction at each pixel in a shorter time than the conventional one. For encoding, the mean square error between the predicted value and the true value is calculated over a local area. This value is used to switch the coding table at each pixel to adapt it to the local statistical characteristics of the image. By applying the proposed method to "Girl" and "Couple" of the IEEE monochromatic standard images, compression ratios of 100 : 46 and 100 : 44 have been achieved, respectively. These results are superior to the best results (100 : 61 and 100 : 57) obtained by the approach under the JPEG recommendations.
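The core of such a scheme is a causal 2-D linear predictor: each pixel is predicted from already-coded neighbours and only the residual is entropy-coded. The sketch below uses fixed west/north/north-west coefficients for clarity; the paper's contribution is precisely that these coefficients vary per pixel via recursive estimation, which is omitted here. Function name and coefficient values are illustrative.

```python
import numpy as np

def predict_residuals(img, coeffs=(0.5, 0.25, 0.25)):
    """Residuals of a causal 2-D linear predictor using the west,
    north, and north-west neighbours. Out-of-image neighbours are
    treated as zero. Lossless coding would entropy-code these residuals."""
    a, b, c = coeffs
    img = img.astype(float)
    pred = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            w = img[i, j - 1] if j > 0 else 0.0
            n = img[i - 1, j] if i > 0 else 0.0
            nw = img[i - 1, j - 1] if i > 0 and j > 0 else 0.0
            pred[i, j] = a * w + b * n + c * nw
    return img - pred
```

Smooth image areas produce residuals near zero, which is what makes the residual stream far more compressible than the raw pixels.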

  • Subband Coding of Super High Definition Images Using Entropy Coded Vector Quantization

    Mitsuru NOMURA  Isao FURUKAWA  Tetsurou FUJII  Sadayasu ONO  

     
    PAPER-Image Coding and Compression

      Vol:
    E75-A No:7
      Page(s):
    861-870

    This paper discusses the bit-rate compression of super high definition still images with subband coding. Super high definition (SHD) images with more than 2048×2048 pixels of resolution are introduced as the next generation imaging system beyond HDTV. In order to develop bit-rate reduction algorithms, an image evaluation system for super high definition images is assembled. Signal characteristics are evaluated and the optimum subband analysis/synthesis system for SHD images is clarified. Scalar quantization combined with run-length and Huffman coding is introduced as a conventional subband coding algorithm, and its coding performance is evaluated for SHD images. Finally, new coding algorithms based on block Huffman coding and entropy coded vector quantization are proposed. SNR improvements of 0.5 dB and 1.0 dB can be achieved with the proposed block Huffman coding and the vector quantization algorithm, respectively.

  • Fast Image Generation Method for Animation

    Jin-Han KIM  Chong-Min KYUNG  

     
    PAPER-Combinational/Numerical/Graphic Algorithms

      Vol:
    E75-A No:6
      Page(s):
    691-700

    A fast scan-line algorithm for a raster-scan graphics display is proposed, based on the observation that a sequence of successive image frames in animation mostly consists of still objects with relatively few moving objects. In the proposed algorithm, successive images are generated using a background image composed of still objects only, and a moving image composed only of moving objects. The color of each pixel in the successive images is then determined by the nearer (to the eye) of the two candidate pixels, one from the background image and the other from the moving image. The background image is generated once in the whole process, while the moving image is generated for each time frame using an interpolation of two images generated at the start and end of the given time interval. For the purpose of fast shadow generation, we classify shadows into three groups, i.e., still shadows cast by still objects on still objects, moving shadows cast by moving objects on still objects, and composite shadows cast by both still and moving objects on moving objects. These shadows can be generated very quickly by utilizing frame coherence. According to the experimental results, a speed-up factor of 3.2 to 12.8, depending on the percentage of moving objects among all objects, was obtained using our algorithm compared to the conventional scheme not utilizing frame-to-frame image coherence.
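The per-pixel choice between the background and moving images amounts to a depth comparison (a Z-buffer-style composite). The sketch below assumes each image carries a color buffer and a depth buffer; the function name and array layout are illustrative, not the paper's data structures.

```python
import numpy as np

def composite(bg_color, bg_depth, mv_color, mv_depth):
    """Per-pixel compositing of the background (still-object) image and
    the moving-object image: the pixel nearer to the eye (smaller depth)
    wins. color arrays: (H, W, 3); depth arrays: (H, W)."""
    nearer = mv_depth < bg_depth
    # Broadcast the boolean mask over the color channels.
    return np.where(nearer[..., None], mv_color, bg_color)
```

Because `bg_color`/`bg_depth` are computed once for the whole sequence, each new frame costs only the moving-object render plus this cheap elementwise merge.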

  • High-Fidelity Sub-Band Coding for Very High Resolution Images

    Takahiro SAITO  Hirofumi HIGUCHI  Takashi KOMATSU  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    327-339

    Very high resolution images with more than 2,000×2,000 pels will play a very important role in a wide variety of applications of future multimedia communications, ranging from electronic publishing to broadcasting. To make communication of very high resolution images practicable, we need to develop image coding techniques that can compress very high resolution images efficiently. Taking the channel capacity limitation of future communication into consideration, the requisite compression ratio is estimated to be at least 1/10 to 1/20 for color signals. Among existing image coding techniques, the sub-band coding technique is one of the most suitable. In its application to high-fidelity compression of very high resolution images, one of the major problems is how to encode high frequency sub-band signals. High frequency sub-band signals are well modeled as having an approximately memoryless probability distribution, and hence the best way to solve this problem is to improve the quantization of high frequency sub-band signals. From this standpoint, the work herein first compares three different scalar quantization schemes and improved permutation codes, which the authors have previously developed by extending the concept of permutation codes, with respect to quantization performance for a memoryless probability distribution that well approximates the real statistical properties of high frequency sub-band signals. It thus demonstrates that at low coding rates improved permutation codes outperform the other scalar quantization schemes, and that this superiority decreases as the coding rate increases.
    Moreover, from these results, the work herein develops a rate-adaptive quantization technique where the number of bits assigned to each subblock is determined according to the signal variance within the subblock, and the proper quantization scheme is chosen from among different types of quantization schemes according to the allocated number of bits. It applies this technique to the high-fidelity encoding of sub-band signals of very high resolution images to demonstrate its usefulness.

  • Image Compression and Regeneration by Nonlinear Associative Silicon Retina

    Mamoru TANAKA  Yoshinori NAKAMURA  Munemitsu IKEGAMI  Kikufumi KANDA  Taizou HATTORI  Yasutami CHIGUSA  Hikaru MIZUTANI  

     
    PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    586-594

    There are two types of nonlinear associative silicon retinas. One is a sparse Hopfield type neural network, called an H-type retina, and the other is its dual network, called a DH-type retina. The input information sequences of H-type and DH-type retinas are given by nodes and links, as voltages and currents respectively. The error correcting capacity (minimum basin of attraction) of H-type and DH-type retinas is decided by the minimum number of links of the cutset and loop respectively. The operating principle of the regeneration is based on the voltage or current distribution of the neural field. The most important nonlinear operation in the retinas is a dynamic quantization that decides the binary value of each neuron output from its neighborhood values. Also, edges are emphasized by a line-process. The compression rates of the H-type and DH-type retinas used in the simulation are 1/8 and (2/3)(1/8) respectively, where 2/3 and 1/8 are the rates of structural and binarizational compression respectively. The simulation results are interesting and significant enough to justify making a chip.

  • Perceptually Transparent Coding of Still Images

    V. Ralph ALGAZI  Todd R. REED  Gary E. FORD  Eric MAURINCOMME  Iftekhar HUSSAIN  Ravindra POTHARLANKA  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    340-348

    The encoding of high quality and super high definition images requires new approaches to the coding problem. The nature of such images and the applications in which they are used prohibit the introduction of perceptible degradation by the coding process. In this paper, we discuss techniques for the perceptually transparent coding of images. Although technically lossy methods, images encoded and reconstructed using these techniques appear identical to the original images. The reconstructed images can be postprocessed (e.g., enhanced via anisotropic filtering), due to the absence of the structured errors commonly introduced by conventional lossy methods. The compression ratios obtained are substantially higher than those achieved using lossless means.

  • A Mean-Separated and Normalized Vector Quantizer with Edge-Adaptive Feedback Estimation and Variable Bit Rates

    Xiping WANG  Shinji OZAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E75-D No:3
      Page(s):
    342-351

    This paper proposes a Mean-Separated and Normalized Vector Quantizer with edge-Adaptive Feedback estimation and variable bit rates (AFMSN-VQ). The basic idea of the AFMSN-VQ is to estimate the statistical parameters of each coding block from its previously coded blocks and then use the estimated parameters to normalize the coding block prior to vector quantization. The edge-adaptive feedback estimator utilizes the interblock correlations of edge connectivity and gray level continuity to accurately estimate the mean and standard deviation of the coding block. The rate-variable VQ diminishes distortion nonuniformity among image blocks of different activities and improves the reconstruction quality of edges and contours, to which human vision is sensitive. Simulation results show that up to 2.7 dB SNR gain of the AFMSN-VQ over the non-adaptive FMSN-VQ, and up to 2.2 dB over the 16×16 ADCT, can be achieved at 0.2-1.0 bit/pixel. Furthermore, the AFMSN-VQ shows coding performance comparable to ADCT-VQ and A-PE-VQ.
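The mean-separated-and-normalized step common to this family of coders can be sketched as follows: each block's mean and standard deviation are extracted (and, in the AFMSN-VQ, estimated from previously coded blocks rather than transmitted), and the normalized block is what the vector quantizer sees. This is a generic illustration, not the paper's estimator; the function name is an assumption.

```python
import numpy as np

def msn_normalize(block):
    """Mean-separated-and-normalized form of an image block.

    Returns the zero-mean, unit-variance block to be vector-quantized,
    together with the mean and standard deviation that a feedback
    estimator would predict from neighbouring coded blocks."""
    m = block.mean()
    s = block.std()
    if s == 0:
        # Flat block: nothing left to vector-quantize.
        return np.zeros_like(block, dtype=float), m, s
    return (block - m) / s, m, s
```

Normalizing removes block-to-block variation in brightness and contrast, so a single small VQ codebook can serve blocks of very different activity.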

  • Model-Based/Waveform Hybrid Coding for Low-Rate Transmission of Facial Images

    Yuichiro NAKAYA  Hiroshi HARASHIMA  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    377-384

    Despite its potential to realize image communication at extremely low rates, model-based coding (analysis-synthesis coding) still has problems to be solved for any practical use. The main problems are the difficulty in modeling unknown objects and the presence of analysis errors. To cope with these difficulties, we incorporate waveform coding into model-based coding (model-based/waveform hybrid coding). The incorporated waveform coder can code unmodeled objects and cancel the artifacts caused by the analysis errors. From a different point of view, the performance of the practically used waveform coder can be improved by the incorporation of model-based coding. Since the model-based coder codes the modeled part of the image at extremely low rates, more bits can be allocated for the coding of the unmodeled region. In this paper, we present the basic concept of model-based/waveform hybrid coding. We develop a model-based/MC-DCT hybrid coding system designed to improve the performance of the practically used MC-DCT coder. Simulation results of the system show that this coding method is effective at very low transmission rates such as 16kb/s. Image transmission at such low rates is quite difficult for an MC-DCT coder without the contribution of the model-based coder.

  • 3D Facial Model Creation Using Generic Model and Front and Side Views of Face

    Takaaki AKIMOTO  Yasuhito SUENAGA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E75-D No:2
      Page(s):
    191-197

    This paper presents an automatic creation method for the 3D facial models needed for facial image generation by 3D computer graphics. A 3D facial model of a specific person is obtained from just the front and side view images, without any human operation. The method has two parts: feature extraction and generic model modification. In the feature extraction part, the regions or edges which express facial features such as the eyes, nose, mouth, or chin outline are extracted from the front and side view images. A generic head model is then modified based on the position and shape of the extracted facial features in the generic model modification part. As a result, a 3D model of the specific person is obtained. Using this specific model and the front and side view images, texture-mapped facial images can be generated easily.

