Links: Paper | Project Page | Code
Abstract
Closed-loop simulation is crucial for end-to-end autonomous driving. Existing sensor simulation methods (e.g., NeRF and 3DGS) reconstruct driving scenes under conditions that closely mirror the training data distribution. As a result, they struggle to render novel trajectories, such as lane changes. Recent works have demonstrated that integrating world model knowledge alleviates these issues; despite their effectiveness, these approaches still have difficulty accurately representing more complex maneuvers, multi-lane shifts being a notable example. We therefore introduce ReconDreamer, which enhances driving scene reconstruction through the incremental integration of world model knowledge. Specifically, we propose DriveRestorer to mitigate artifacts via online restoration, complemented by a progressive data update strategy designed to ensure high-quality rendering for more complex maneuvers. To the best of our knowledge, ReconDreamer is the first method to effectively render large maneuvers. Experimental results demonstrate that ReconDreamer outperforms Street Gaussians in NTA-IoU, NTL-IoU, and FID, with relative improvements of 24.87%, 6.72%, and 29.97%, respectively. Furthermore, ReconDreamer surpasses DriveDreamer4D with the PVG backbone in large-maneuver rendering, as verified by a relative improvement of 195.87% in NTA-IoU and a comprehensive user study.
Methodology

(Method overview figure from the project page omitted here.)

Architectures

(Architecture figure from the project page omitted here.)
Cite
If you find this work useful in your research, please cite:
@inproceedings{Ni2024ReconDreamerCW,
  title  = {ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration},
  author = {Chaojun Ni and Guosheng Zhao and Xiaofeng Wang and Zheng Zhu and Wenkang Qin and Guan Huang and Chen Liu and Yuyin Chen and Yida Wang and Xueyang Zhang and Yifei Zhan and Kun Zhan and Peng Jia and Xianpeng Lang and Xingang Wang and Wenjun Mei},
  year   = {2024},
  url    = {https://arxiv.org/abs/2411.19548}
}