3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds that make it suitable for real-time applications. However, current methods require highly controlled environments (no moving people or wind-blown elements, and consistent lighting) to satisfy the inter-view consistency assumption of 3DGS. This makes reconstruction from real-world captures problematic. We present SpotLessSplats, a novel approach that leverages Stable Diffusion features and robust optimization to effectively ignore transient distractors. Our method achieves state-of-the-art reconstruction quality, both visually and quantitatively, on casual captures.
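To make the idea of "robust optimization that ignores transient distractors" concrete, below is a minimal, hypothetical sketch of a robust photometric loss, assuming per-pixel residuals between the rendered and captured view and precomputed per-pixel semantic features (e.g., upsampled Stable Diffusion features). The function name `robust_photometric_loss`, the quantile threshold, and the similarity-based mask propagation are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: down-weight pixels that likely belong to transient
# distractors when computing the photometric loss for 3DGS training.
# This is NOT the authors' exact method, only an illustration of robust masking.

import torch


def robust_photometric_loss(rendered, target, features, quantile=0.9):
    """Robust photometric loss with semantic outlier propagation.

    rendered, target: (H, W, 3) float tensors in [0, 1]
    features:         (H, W, C) semantic features (assumed precomputed)
    quantile:         residuals above this quantile are treated as outliers
    """
    # Per-pixel photometric residual.
    residual = (rendered - target).abs().mean(dim=-1)          # (H, W)

    # Robust threshold: pixels with unusually large residuals are suspect.
    threshold = torch.quantile(residual, quantile)
    outlier = residual > threshold                              # (H, W) bool

    # Spread the outlier decision to semantically similar pixels, so whole
    # objects (e.g., a person walking through the scene) are masked rather
    # than only the pixels with high error at this iteration.
    flat_feat = features.reshape(-1, features.shape[-1])        # (H*W, C)
    flat_feat = torch.nn.functional.normalize(flat_feat, dim=-1)
    out_feat = flat_feat[outlier.reshape(-1)]
    if out_feat.numel() > 0:
        similarity = flat_feat @ out_feat.mean(dim=0)           # (H*W,)
        outlier = outlier | (similarity.reshape_as(residual) > 0.8)

    # Stop-gradient through the mask; average the loss over inlier pixels.
    inlier_mask = (~outlier).float().detach()
    return (inlier_mask * residual).sum() / inlier_mask.sum().clamp(min=1.0)
```

In a training loop, such a masked loss could stand in for the standard photometric term on each rendered view, so that gradients from occluded or transient pixels do not corrupt the static scene reconstruction.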
Novel view synthesis on the casually captured bike scene:
We show a video comparison to vanilla 3DGS on the RobustNeRF dataset:
We also show a video comparison to vanilla 3DGS on the NeRF On-the-go dataset:
Scenes: Patio, Corner, Spot, Patio (High Occlusion), Mountain, Fountain