LiDPM: Rethinking Point Diffusion for Lidar Scene Completion

IEEE IV 2025

Inria · valeo.ai · LIGM, ENPC, Univ Gustave Eiffel, CNRS (France)
[Figure] Left: LiDPM (ours) w/o refinement, implementing global diffusion. Right: LiDiff w/o refinement, implementing local diffusion.

The LiDPM formulation (left) follows the general DDPM paradigm, yielding more realistic and accurate completions than the local diffusion of LiDiff [1] (right).

Abstract

Training diffusion models that work directly on lidar points at the scale of outdoor scenes is challenging due to the difficulty of generating fine-grained details from white noise over a broad field of view. The latest works addressing scene completion with diffusion models tackle this problem by reformulating the original DDPM as a local diffusion process. This contrasts with the common practice at the object level, where vanilla DDPMs are used.
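
For reference, the vanilla DDPM mentioned above is the standard formulation, written here in textbook notation (these equations are generic DDPM background, not reproduced from the paper): a fixed forward process gradually noises a point cloud x_0, and a learned reverse process denoises it step by step.

    \begin{align}
      q(\mathbf{x}_t \mid \mathbf{x}_0) &= \mathcal{N}\!\left(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right),
        \qquad \bar{\alpha}_t = \prod_{s=1}^{t} (1-\beta_s), \\
      p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) &= \mathcal{N}\!\left(\mu_\theta(\mathbf{x}_t, t),\ \sigma_t^2 \mathbf{I}\right),
    \end{align}

with $\beta_t$ the noise schedule.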

In this work, we close the gap between these two lines of work. We identify approximations in the local diffusion formulation and show that they are not required to operate at the scene level: a vanilla DDPM with a well-chosen starting point is enough for completion. Finally, we demonstrate that our method, LiDPM, leads to better results in scene completion on SemanticKITTI [2].
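
One plausible reading of "a well-chosen starting point", sketched below under that assumption rather than taken from the paper, is to start the reverse process not from pure Gaussian noise but from the input scan diffused forward to an intermediate timestep, after which plain DDPM ancestral sampling runs unchanged. The network model, the function name complete_scene, the linear beta schedule, and the value of T_START are all illustrative choices, not the authors' released code.

    import torch

    T = 1000                 # total diffusion steps (illustrative)
    T_START = 500            # assumed intermediate starting step (illustrative)
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)

    @torch.no_grad()
    def complete_scene(model, x_input):
        """x_input: (N, 3) sparse lidar points; returns a densified point set."""
        # Forward-diffuse the input scan to step T_START using the closed form q(x_t | x_0).
        noise = torch.randn_like(x_input)
        a_bar = alphas_bar[T_START]
        x_t = a_bar.sqrt() * x_input + (1.0 - a_bar).sqrt() * noise

        # Plain DDPM ancestral sampling from T_START down to 0.
        for t in range(T_START, 0, -1):
            eps = model(x_t, torch.tensor([t]))  # hypothetical noise-prediction network
            a_t, a_bar_t = alphas[t], alphas_bar[t]
            mean = (x_t - (1.0 - a_t) / (1.0 - a_bar_t).sqrt() * eps) / a_t.sqrt()
            if t > 1:
                x_t = mean + betas[t].sqrt() * torch.randn_like(x_t)
            else:
                x_t = mean
        return x_t

Under this reading, the sampler itself is untouched; only the initialization changes, which is what keeps the formulation "vanilla".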

BibTeX

@INPROCEEDINGS{martyniuk2025lidpm,
      author    = {Martyniuk, Tetiana and Puy, Gilles and Boulch, Alexandre and Marlet, Renaud and de Charette, Raoul},
      booktitle = {2025 IEEE Intelligent Vehicles Symposium (IV)},
      title     = {LiDPM: Rethinking Point Diffusion for Lidar Scene Completion},
      year      = {2025},
    }

References

  1. L. Nunes, R. Marcuzzi, B. Mersch, J. Behley, C. Stachniss. Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion. CVPR, 2024. [arXiv]
  2. J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, J. Gall. SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. ICCV, 2019. [arXiv]

Acknowledgements

This work has been carried out using HPC resources from GENCI-IDRIS (grants AD011014484R1, AD011012883R3).
We thank Mickael Chen, Corentin Sautier, and Mohammad Fahes for proofreading and valuable feedback.