R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation


Abstract
Validating autonomous driving (AD) systems requires diverse and safety-critical testing, making photorealistic virtual environments essential. Traditional simulation platforms, while controllable, are resource-intensive to scale and often suffer from a domain gap with real-world data. In contrast, neural reconstruction methods like 3D Gaussian Splatting (3DGS) offer a scalable solution for creating photorealistic digital twins of real-world driving scenes. However, they struggle with dynamic object manipulation and reusability as their per-scene optimization-based methodology tends to result in incomplete object models with integrated illumination effects. This paper introduces R3D2, a lightweight, one-step diffusion model designed to overcome these limitations and enable realistic insertion of complete 3D assets into existing scenes by generating plausible rendering effects—such as shadows and consistent lighting—in real time. This is achieved by training R3D2 on a novel dataset: 3DGS object assets are generated from in-the-wild AD data using an image-conditioned 3D generative model, and then synthetically placed into neural rendering-based virtual environments, allowing R3D2 to learn realistic integration. Quantitative and qualitative evaluations demonstrate that R3D2 significantly enhances the realism of inserted assets, enabling use-cases like text-to-3D asset insertion and cross-scene/dataset object transfer, allowing for true scalability in AD validation.
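As a rough illustration of the inference step described above, the sketch below takes a naive composite render (scene with an inserted 3DGS asset) and refines it with a single forward pass. All names here (OneStepRefiner, the placeholder composite tensor, and the tiny convolutional stand-in for the denoiser) are illustrative assumptions, not the released R3D2 code.

import torch
import torch.nn as nn

class OneStepRefiner(nn.Module):
    """Hypothetical stand-in for the one-step diffusion refiner.
    A single conv layer keeps the sketch runnable; the real model is far larger."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    @torch.no_grad()
    def forward(self, composite: torch.Tensor) -> torch.Tensor:
        # One forward pass: a composite with a naively inserted 3DGS asset goes in,
        # an image with plausible shadows and consistent lighting comes out.
        return torch.sigmoid(self.net(composite))

# Placeholder for the naive composite render of the scene with the inserted asset.
composite = torch.rand(1, 3, 256, 512)
refined = OneStepRefiner()(composite)
print(refined.shape)  # torch.Size([1, 3, 256, 512])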
Object Rotation
We show how our method improves the rendering of manipulated actors. We rotate all actors by 20 degrees, re-render the scene, and refine the resulting image with R3D2.
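For concreteness, here is a minimal sketch of the actor manipulation, assuming each actor is stored as a 4x4 object-to-world pose with a z-up convention; the function name and data layout are illustrative, not the actual pipeline.

import numpy as np

def rotate_actor_yaw(pose: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an actor about its own vertical (yaw) axis before re-rendering."""
    theta = np.deg2rad(degrees)
    yaw = np.array([
        [np.cos(theta), -np.sin(theta), 0.0, 0.0],
        [np.sin(theta),  np.cos(theta), 0.0, 0.0],
        [0.0,            0.0,           1.0, 0.0],
        [0.0,            0.0,           0.0, 1.0],
    ])
    # Applying the yaw in the object frame keeps the actor at its original position.
    return pose @ yaw

# Rotate every actor by 20 degrees; the scene is then re-rendered and the image
# is refined with a single R3D2 forward pass, as in the sketch above.
actor_poses = [np.eye(4)]  # placeholder list of object-to-world matrices
rotated_poses = [rotate_actor_yaw(p, 20.0) for p in actor_poses]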


Cross-Dataset Object Insertion
In these examples, we replace the assets in 3DGS scenes reconstructed from Waymo with 3DGS assets extracted from PandaSet sequences.
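A sketch of the asset swap, under an assumed layout where each actor pairs a 3DGS asset with its object-to-world pose; the dictionary keys and function name are hypothetical.

def swap_assets(waymo_actors, pandaset_assets):
    """Replace each Waymo actor's 3DGS asset with a PandaSet asset while keeping
    the original Waymo placement, so the scene layout is unchanged."""
    swapped = []
    for actor, new_asset in zip(waymo_actors, pandaset_assets):
        swapped.append({
            "gaussians": new_asset["gaussians"],  # geometry/appearance from PandaSet
            "pose": actor["pose"],                # placement from the Waymo scene
        })
    return swapped

# The swapped scene is then rendered and refined with R3D2 to harmonize lighting and shadows.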


Text-to-3D Object Insertion
We also insert assets generated from text prompts into reconstructed driving scenes, with R3D2 adding plausible shadows and consistent lighting to harmonize them with their surroundings.


Bibtex
@article{ljungbergh2025r3d2,
title = {R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation},
author = {Ljungbergh, William and Taveira, Bernardo and Zheng, Wenzhao and Tonderski, Adam and Peng, Chensheng and Kahl, Fredrik and Petersson, Christoffer and Felsberg, Michael and Keutzer, Kurt and Tomizuka, Masayoshi and Zhan, Wei},
journal = {arXiv preprint arXiv:2506.07826},
year = {2025}
}