Keyword Search Result

[Keyword] point cloud (7 hits)

1-7 of 7 hits
  • Neural Network-Based Post-Processing Filter on V-PCC Attribute Frames

    Keiichiro TAKADA  Yasuaki TOKUMO  Tomohiro IKAI  Takeshi CHUJOH  

     
    LETTER

    Publicized: 2023/07/13
    Vol: E106-D No:10
    Page(s): 1673-1676

    Video-based point cloud compression (V-PCC) utilizes video compression technology to efficiently encode dense point clouds, providing state-of-the-art compression performance with a relatively small computational burden. V-PCC converts 3-dimensional point cloud data into three types of 2-dimensional frames, i.e., occupancy, geometry, and attribute frames, and encodes them via video compression. However, the quality of these frames may be degraded by the video compression itself. This paper proposes an adaptive neural network-based post-processing filter on attribute frames to alleviate this degradation. Furthermore, a novel training method using occupancy frames is studied. The experimental results show average BD-rate gains of 3.0%, 29.3%, and 22.2% for Y, U, and V, respectively.
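    The abstract describes filtering the decoded attribute frame while using the occupancy frame to identify which pixels correspond to real points. A minimal sketch of that idea, with a plain averaging kernel standing in for the paper's learned neural filter (the kernel and function names are illustrative assumptions, not the authors' method):

```python
import numpy as np

def masked_filter(attribute, occupancy, kernel):
    """Filter an attribute frame, but keep the filtered values only
    where the occupancy frame marks valid (occupied) pixels.
    `kernel` is a small 2D convolution kernel standing in for the
    paper's neural network post-processing filter."""
    h, w = attribute.shape
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(attribute, ((ph, ph), (pw, pw)), mode="edge")
    filtered = np.zeros_like(attribute, dtype=float)
    for i in range(h):
        for j in range(w):
            filtered[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel)
    # Occupancy mask: unoccupied pixels pass through unchanged.
    return np.where(occupancy > 0, filtered, attribute)
```

    Using occupancy this way at training time is one plausible reading of "a novel training method using occupancy frames": loss is only meaningful on occupied pixels.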

  • Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders

    Kenshiro TAMATA  Tomohiro MASHITA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2021/10/11
    Vol: E105-D No:1
    Page(s): 134-140

    A typical approach to reconstructing a 3D environment model is to scan the environment with a depth sensor and fit the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. In some practical uses, however, this assumption does not hold. Thus, a point cloud matching method for stitching together several non-continuous 3D scans is required. Point cloud matching often includes errors in feature point detection because a point cloud is fundamentally a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore assume that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration errors described above. To achieve this goal, we designed a deep-learning-based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether a feature point pair is close enough to be considered a match, even when the feature point registration errors are large, and that it achieves higher accuracy than methods such as FPFH or 3DMatch. In addition, we conducted experiments on combinations of input point clouds (local only, global only, or both) and encoders.
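    The metric-learning step trains descriptors so that matching pairs end up close and non-matching pairs far apart. A generic contrastive loss of the kind commonly used for this (a simplified stand-in, not the authors' exact objective) can be sketched as:

```python
import numpy as np

def contrastive_loss(desc_a, desc_b, is_match, margin=1.0):
    """Contrastive metric-learning loss over descriptor pairs:
    matching pairs (is_match == 1) are pulled together, non-matching
    pairs pushed beyond `margin`. Illustrative simplification of the
    kind of objective the paper trains with."""
    d = np.linalg.norm(desc_a - desc_b, axis=1)            # pairwise distances
    pos = is_match * d ** 2                                # pull matches together
    neg = (1 - is_match) * np.maximum(margin - d, 0) ** 2  # push non-matches apart
    return float(np.mean(pos + neg))
```

    Training on feature point pairs that deliberately include registration error, as the abstract describes, amounts to sampling `desc_a`/`desc_b` from perturbed correspondences.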

  • GECNN for Weakly Supervised Semantic Segmentation of 3D Point Clouds

    Zifen HE  Shouye ZHU  Ying HUANG  Yinhui ZHANG  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2021/09/24
    Vol: E104-D No:12
    Page(s): 2237-2243

    This paper presents a novel method for weakly supervised semantic segmentation of 3D point clouds using a graph and edge convolutional neural network (GECNN), targeting settings where only 1% or 10% of the points are labeled. Our general framework facilitates semantic segmentation by encoding both global- and local-scale features via a parallel graph and edge aggregation scheme. More specifically, global-scale graph structure cues of point clouds are captured by a graph convolutional neural network, which propagates pairwise affinity representations over the whole graph established in a d-dimensional feature embedding space. We integrate local-scale features derived from a dynamic edge feature aggregation convolutional neural network, which allows us to fuse both global and local cues of 3D point clouds. The proposed GECNN model is trained with a comprehensive objective consisting of incomplete-supervision, inexact-supervision, self-supervision, and smoothness constraints based on partially labeled points. The proposed approach enforces global and local consistency constraints directly in the objective losses. It inherently handles the challenges of segmenting sparse 3D point clouds with limited annotations in a large-scale point cloud space. Our experiments on the ShapeNet and S3DIS benchmarks demonstrate the effectiveness of the proposed approach for efficient (within 20 epochs) learning of large-scale point cloud semantics despite very limited labels.
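    The local branch's "dynamic edge feature aggregation" is in the spirit of EdgeConv: for each point, build edge features to its k nearest neighbors and max-pool them. A non-learned sketch of that aggregation step (the paper's GECNN applies learned transforms before pooling; this only shows the neighborhood/pooling structure):

```python
import numpy as np

def edge_feature_aggregate(points, k=3):
    """For each point, gather its k nearest neighbors, form edge
    features (x_j - x_i) concatenated with the point itself, and
    max-pool over the neighborhood. Sketch of an EdgeConv-style
    aggregation; no learned weights are applied here."""
    n = points.shape[0]
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sum(diff ** 2, axis=-1)        # pairwise squared distances
    np.fill_diagonal(dist, np.inf)           # exclude each point itself
    idx = np.argsort(dist, axis=1)[:, :k]    # k nearest neighbors
    feats = []
    for i in range(n):
        edge = points[idx[i]] - points[i]    # relative (edge) features
        local = np.concatenate([np.repeat(points[i][None], k, 0), edge], axis=1)
        feats.append(local.max(axis=0))      # max-pool over neighbors
    return np.stack(feats)                   # shape (n, 2 * dim)
```

    "Dynamic" in the paper's setting means the neighbor graph is recomputed in the learned feature space rather than fixed in input coordinates.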

  • Calibration of Turntable Based 3D Scanning Systems

    Duhu MAN  Mark W. JONES  Danrong LI  Honglong ZHANG  Zhan SONG  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2019/05/30
    Vol: E102-D No:9
    Page(s): 1833-1841

    The consistent alignment of point clouds obtained from multiple scanning positions is a crucial step for many 3D modeling systems, and especially for environment modeling. In order to observe the full scene, a common approach is to rotate the scanning device around a rotation axis using a turntable. The final alignment of each data frame can then be computed from the position and orientation of the rotation axis. In practice, however, precise mounting of the scanning device is impossible: it is hard to place the vertical support of the turntable and the rotation axis on a common line, particularly for lower-cost consumer hardware. Therefore, calibration of the turntable's rotation axis is an important step in 3D reconstruction. In this paper we propose a novel calibration method for the rotation axis of the turntable. With the proposed rotation axis calibration method, multiple 3D profiles of the target scene can be aligned precisely. In the experiments, three different evaluation approaches are used to assess the calibration accuracy of the rotation axis. The experimental results show that the proposed method achieves high accuracy.
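    Once the axis is calibrated, aligning each scan is a rotation of the point cloud about that axis by the known turntable angle. A sketch of that alignment step using Rodrigues' rotation formula, assuming the axis (a point on it and a unit direction) is already known — estimating it is the paper's actual contribution:

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle):
    """Rotate a point cloud about an arbitrary axis given by a point
    on the axis and a direction vector (Rodrigues' formula). This is
    the per-frame alignment step once the turntable axis has been
    calibrated."""
    u = axis_dir / np.linalg.norm(axis_dir)
    c, s = np.cos(angle), np.sin(angle)
    # Skew-symmetric cross-product matrix of the unit axis u
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])
    R = c * np.eye(3) + s * K + (1 - c) * np.outer(u, u)
    # Translate to the axis, rotate, translate back
    return (points - axis_point) @ R.T + axis_point
```

    A miscalibrated `axis_point` or `axis_dir` shows up directly as misalignment between consecutive frames, which is why the paper evaluates calibration accuracy in three different ways.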

  • Radio Propagation Prediction Method Using Point Cloud Data Based on Hybrid of Ray-Tracing and Effective Roughness Model in Urban Environments

    Minoru INOMATA  Tetsuro IMAI  Koshiro KITAO  Yukihiko OKUMURA  Motoharu SASAKI  Yasushi TAKATORI  

     
    PAPER-Antennas and Propagation

    Publicized: 2018/07/10
    Vol: E102-B No:1
    Page(s): 51-62

    This paper proposes a radio propagation prediction method that uses point cloud data based on a hybrid of the ray-tracing (RT) method and an effective roughness (ER) model in urban environments, targeting fifth-generation mobile communication systems using high-frequency bands. The proposed method incorporates propagation characteristics that account for diffuse scattering from surface irregularities. Its validity is confirmed by comparing measurements with predictions from both the proposed method and a conventional RT method, based on power delay and angular profiles. From these comparisons, we find that the proposed method, assuming a roughness of σh = 1 mm, accurately predicts the propagation characteristics in the 20 GHz band for urban line-of-sight environments. The prediction error for the delay spread is 2.1 ns to 9.7 ns in an urban environment.
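    Why a 1 mm roughness matters at 20 GHz: the wavelength is only 15 mm, so surface irregularities are a non-negligible fraction of it. A related classical quantity, the Rayleigh roughness factor, shows how the specular component weakens as σh grows (this is background for roughness-aware models generally, not the paper's ER formulation itself):

```python
import numpy as np

def rayleigh_roughness_factor(sigma_h, wavelength, incidence_cos):
    """Attenuation of the specular reflection due to surface roughness
    (classical Rayleigh factor). ER-type models go further and
    redistribute the lost specular power as diffuse scattering; this
    sketch shows only the roughness-dependent specular loss."""
    # Std. dev. of the phase difference induced by height deviations
    phase_std = 4.0 * np.pi * sigma_h * incidence_cos / wavelength
    return np.exp(-0.5 * phase_std ** 2)
```

    At 20 GHz (λ = 15 mm) with σh = 1 mm, the factor is well below 1 at near-normal incidence, i.e., a noticeable share of the power is scattered diffusely rather than reflected specularly, which is the effect the hybrid RT/ER method models.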

  • Non-Blind Deconvolution of Point Cloud Attributes in Graph Spectral Domain

    Kaoru YAMAMOTO  Masaki ONUKI  Yuichi TANAKA  

     
    PAPER

    Vol: E100-A No:9
    Page(s): 1751-1759

    We propose a non-blind deconvolution algorithm for point cloud attributes inspired by multi-Wiener SURE-LET deconvolution for images. The image reconstructed by the SURE-LET approach is expressed as a linear combination of multiple filtered images, where the filters are defined in the frequency domain. The coefficients of the linear combination are calculated so that an estimate of the mean squared error between the original and restored images is minimized. Although the approach is very effective, it is only applicable to images. Increasingly, we must handle signals on irregular grids, e.g., texture data on 3D models, which are often blurred due to diffusion or object motion. However, we cannot apply image-processing-based approaches directly, since these high-dimensional signals cannot be transformed into a regular frequency domain. To overcome this problem, we use graph signal processing (GSP) to deblur such complex-structured data. That is, the SURE-LET approach is redefined on GSP: Wiener-like filtering is followed by subband decomposition with an analysis graph filter bank, and then thresholding is performed for each subband. In the experiments, the proposed method is applied to blurred textures on 3D models and to synthetic sparse data. The experimental results show clearly deblurred signals with SNR improvements.
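    The core LET idea — reconstruct as a linear combination of filtered versions, with coefficients chosen to minimize an MSE estimate — can be sketched with an oracle least-squares fit. SURE-LET's contribution is estimating these weights *without* the clean signal, via Stein's unbiased risk estimate; this sketch uses the clean target directly for clarity:

```python
import numpy as np

def best_linear_combination(filtered_list, target):
    """Combine several filtered versions of a signal with weights
    that minimize mean squared error to `target` (plain least
    squares). SURE-LET obtains equivalent weights without access to
    the clean target, using Stein's unbiased risk estimate."""
    F = np.stack(filtered_list, axis=1)             # (n, num_filters)
    a, *_ = np.linalg.lstsq(F, target, rcond=None)  # optimal weights
    return F @ a, a
```

    In the paper's graph-domain version, each column of `F` would be a Wiener-like graph filter output, further split into subbands by an analysis graph filter bank before thresholding.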

  • SSM-HPC: Front View Gait Recognition Using Spherical Space Model with Human Point Clouds

    Jegoon RYU  Sei-ichiro KAMATA  Alireza AHRARY  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E95-D No:7
    Page(s): 1969-1978

    In this paper, we propose a novel gait recognition framework, the Spherical Space Model with Human Point Clouds (SSM-HPC), to recognize the front view of human gait. A new gait representation, the Marching in Place (MIP) gait, is also introduced, which preserves the spatiotemporal characteristics of an individual's gait manner. In contrast to previous studies on gait recognition, which usually use human silhouette images from image sequences, this research applies three-dimensional (3D) point cloud data of the human body obtained from a stereo camera. The proposed framework exhibits gait recognition rates superior to those of other gait recognition methods.
