3D car modeling is crucial for applications in autonomous driving systems, virtual and augmented reality, and gaming. However, due to the distinctive properties of cars, such as highly reflective and transparent surface materials, existing methods often struggle to achieve accurate 3D car reconstruction. To address these limitations, we propose Car-GS, a novel approach designed to mitigate the effects of specular highlights and the coupling of RGB and geometry in 3D Gaussian Splatting (3DGS). Our method incorporates three key innovations: First, we introduce view-dependent Gaussian primitives to effectively model surface reflections. Second, we identify the limitations of using a shared opacity parameter for both image rendering and geometric attributes when modeling transparent objects. To overcome this, we assign a learnable geometry-specific opacity to each 2D Gaussian primitive, dedicated solely to rendering depth and normals. Third, we observe that reconstruction errors are most prominent when the camera view is nearly orthogonal to glass surfaces. To address this issue, we develop a quality-aware supervision module that adaptively leverages normal priors from a pre-trained large-scale normal model. Experimental results demonstrate that Car-GS achieves precise reconstruction of car surfaces and significantly outperforms prior methods.
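To make the second innovation concrete, the sketch below illustrates the idea of decoupling image opacity from geometry opacity per primitive. This is a minimal illustration, not the authors' implementation: the class `GaussianOpacities`, the `composite` helper, and all parameter names are hypothetical, and the compositing is shown for a single ray rather than a full 2DGS rasterizer.

```python
import torch
import torch.nn as nn

class GaussianOpacities(nn.Module):
    """Illustrative sketch (not the paper's code): each primitive carries two
    learnable opacities -- one used for RGB compositing, one used only when
    rasterizing depth/normal maps -- so glass can remain see-through in the
    rendered image while still contributing solid geometry."""

    def __init__(self, num_primitives: int):
        super().__init__()
        # Stored as logits; sigmoid keeps both opacities in (0, 1).
        self.rgb_opacity_logit = nn.Parameter(torch.zeros(num_primitives))
        self.geo_opacity_logit = nn.Parameter(torch.zeros(num_primitives))

    def forward(self):
        return (torch.sigmoid(self.rgb_opacity_logit),
                torch.sigmoid(self.geo_opacity_logit))


def composite(values: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Front-to-back alpha compositing along one ray.

    values: (N, C) per-primitive attributes (colors, or depth/normals)
    alphas: (N,)   per-primitive opacities after the Gaussian falloff
    """
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas[:-1]]), dim=0)
    weights = alphas * transmittance
    return (weights.unsqueeze(-1) * values).sum(dim=0)


# Toy usage: 4 primitives intersected by a ray.
opac = GaussianOpacities(num_primitives=4)
rgb_alpha, geo_alpha = opac()
rendered_rgb = composite(torch.rand(4, 3), rgb_alpha)    # image opacity
rendered_depth = composite(torch.rand(4, 1), geo_alpha)  # geometry-only opacity
```

In this reading, the RGB loss updates only the image opacity while depth/normal supervision updates the geometry opacity, which is one plausible way to realize the decoupling described in the abstract.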