SpotLessSplats: Ignoring Distractors in 3D Gaussian Splatting

ACM Transactions on Graphics (TOG) 2025

Sara Sabour*
Google DeepMind
University of Toronto
Lily Goli*
Google DeepMind
University of Toronto
George Kopanas
Google DeepMind
Mark Matthews
Google DeepMind
Dmitry Lagun
Google DeepMind
Leonidas Guibas
Google DeepMind
Stanford University
Alec Jacobson
University of Toronto
David J. Fleet
Google DeepMind
University of Toronto
Andrea Tagliasacchi
Google DeepMind
University of Toronto
Simon Fraser University
  • Paper

  • arXiv

  • Code

Abstract

3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds that make it suitable for real-time applications. However, current methods require highly controlled environments (no moving people or wind-blown elements, and consistent lighting) to satisfy the inter-view consistency assumption of 3DGS, which makes reconstruction of real-world captures problematic. We present SpotLessSplats, a novel approach that leverages Stable Diffusion features and robust optimization to effectively ignore transient distractors. Our method achieves state-of-the-art reconstruction quality, both visually and quantitatively, on casual captures.
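To give a flavor of the robust-optimization idea, here is a minimal sketch of masking high-residual pixels out of the photometric loss. This is a simplified, RobustNeRF-style thresholding heuristic for illustration only; the function names and the fixed `keep_fraction` parameter are assumptions, and the actual SpotLessSplats method instead derives masks from Stable Diffusion features, which this sketch does not reproduce.

```python
import numpy as np

def robust_inlier_mask(rendered, target, keep_fraction=0.7):
    """Mark the lowest-residual pixels as inliers; the rest (likely
    transient distractors) are dropped from the loss.
    Simplified illustration, not the actual SpotLessSplats masking."""
    residual = np.abs(rendered - target).mean(axis=-1)  # per-pixel error
    threshold = np.quantile(residual, keep_fraction)
    return residual <= threshold  # True = inlier pixel

def masked_l1_loss(rendered, target, mask):
    """L1 photometric loss restricted to inlier pixels."""
    per_pixel = np.abs(rendered - target).mean(axis=-1)
    return (per_pixel * mask).sum() / max(mask.sum(), 1)
```

In a training loop, the mask would be recomputed per iteration from the current render, so pixels that the static 3DGS model cannot explain stop contributing gradients.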

Results

Novel view synthesis on the casually captured bike scene:

We show a video comparison to vanilla 3DGS on the RobustNeRF dataset:


We also show a video comparison to vanilla 3DGS on the NeRF On-the-go dataset:

Patio

Corner

Spot

Patio (High Occlusion)

Mountain

Fountain

Citation