PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models

This repository is the official PyTorch implementation of PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models.

Controllable generation is considered a potentially vital approach to addressing the challenge of annotating 3D data, and the precision of such controllable generation becomes particularly important in the context of data production for autonomous driving. Existing methods focus on integrating diverse generative information into controlling inputs, utilizing frameworks such as GLIGEN or ControlNet to produce commendable outcomes in controllable generation. However, such approaches intrinsically restrict generation performance to the learning capacities of predefined network architectures. In this paper, we explore the integration of controlling information and introduce PerlDiff (Perspective-Layout Diffusion Models), a method for effective street view image generation that fully leverages perspective 3D geometric information. PerlDiff employs 3D geometric priors to guide the generation of street view images with precise object-level control within the network learning process, resulting in more robust and controllable output. Moreover, it demonstrates superior controllability compared to alternative layout control methods. Empirical results validate that PerlDiff markedly enhances the precision of generation on the NuScenes and KITTI datasets.


PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models

Jinhua Zhang, Hualian Sheng, Sijia Cai, Bing Deng, Qiao Liang, Wen Li, Ying Fu, Jieping Ye, Shuhang Gu




PerlDiff utilizes perspective-layout masking maps derived from BEV annotations to integrate scene information and object bounding boxes for multi-view street scene generation.
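For intuition, the snippet below is a minimal sketch of how one such perspective-layout mask could be rasterized from a single 3D box annotation; the corner layout, camera convention, and helper name are illustrative assumptions rather than the repository's API (the official code has not yet been released).

```python
# Minimal sketch (not the official implementation): rasterizing a
# perspective-layout mask from one 3D bounding box annotation.
import numpy as np
import cv2


def box_corners_to_mask(corners_3d, intrinsics, extrinsics, image_hw):
    """Project the 8 corners of a 3D box into the image and fill their hull.

    corners_3d : (8, 3) box corners in ego/world coordinates (assumed layout)
    intrinsics : (3, 3) camera intrinsic matrix K
    extrinsics : (4, 4) world-to-camera transform
    image_hw   : (H, W) output mask size
    """
    h, w = image_hw
    # Move corners into the camera frame (homogeneous coordinates).
    pts = np.concatenate([corners_3d, np.ones((8, 1))], axis=1)  # (8, 4)
    cam = (extrinsics @ pts.T).T[:, :3]                          # (8, 3)
    if (cam[:, 2] <= 0.1).all():
        # Box entirely behind the camera: empty mask.
        return np.zeros((h, w), dtype=np.float32)
    # Perspective projection onto the image plane (depths clipped to stay
    # positive; boxes partially behind the camera are only approximated).
    uv = (intrinsics @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    # Rasterize the convex hull of the projected corners as a binary mask.
    mask = np.zeros((h, w), dtype=np.uint8)
    hull = cv2.convexHull(uv.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 1)
    return mask.astype(np.float32)
```

One mask per annotated object, stacked per camera view, would then serve as the object-level geometric prior that guides generation.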

News

  • [8 July 2024] ✨ Paper released!

Code Usage

Code will be released soon!