Latent-Reframe: Enabling Camera Control for Video Diffusion Model without Training


Abstract

Precise camera pose control is crucial for video generation with diffusion models. Existing methods require fine-tuning on additional datasets containing paired videos and camera pose annotations, a process that is data-intensive and computationally costly and can disrupt the pre-trained model's distribution. We introduce Latent-Reframe, which enables camera control in a pre-trained video diffusion model without fine-tuning. Unlike existing methods, Latent-Reframe operates during the sampling stage, maintaining efficiency while preserving the original model distribution. Our approach reframes the latent code of video frames to align with the input camera trajectory through time-aware point clouds. Latent code inpainting and harmonization then refine the model's latent space, ensuring high-quality video generation. Experimental results demonstrate that Latent-Reframe achieves camera control precision and video quality comparable or superior to those of training-based methods, without fine-tuning on additional datasets.
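
To make the reframing idea concrete, below is a minimal sketch of the geometric step: lifting a latent grid to a 3D point cloud with a depth map and reprojecting it under a target camera pose, leaving holes to be filled by latent inpainting. This is not the authors' implementation; the function names (`unproject`, `reproject`), the per-frame depth input, the nearest-cell splatting, and the use of NumPy are illustrative assumptions, and the time-aware point-cloud aggregation and latent harmonization steps described in the abstract are omitted.

```python
import numpy as np

def unproject(depth, K, cam_to_world):
    """Lift every latent-grid cell to a world-space 3D point using a depth map
    and pinhole intrinsics K (all inputs here are illustrative assumptions)."""
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs + 0.5, ys + 0.5, np.ones((h, w))], axis=-1)   # homogeneous pixel coords
    rays = pix @ np.linalg.inv(K).T                                  # camera-space directions
    pts_cam = rays * depth[..., None]                                # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((h, w, 1))], axis=-1)
    return (pts_h @ cam_to_world.T)[..., :3]                         # (h, w, 3) world points

def reproject(points_world, feats, K, world_to_cam):
    """Splat world-space points carrying latent features into the target view.
    Returns the reframed latent grid and a mask of empty cells to be inpainted."""
    h, w, c = feats.shape
    out = np.zeros((h, w, c))
    filled = np.zeros((h, w), dtype=bool)
    pts_h = np.concatenate([points_world, np.ones((h, w, 1))], axis=-1)
    cam = (pts_h @ world_to_cam.T)[..., :3]
    z = np.clip(cam[..., 2], 1e-6, None)
    uv = cam @ K.T                                                   # perspective projection
    u = np.round(uv[..., 0] / z).astype(int)
    v = np.round(uv[..., 1] / z).astype(int)
    ok = (cam[..., 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[ok], u[ok]] = feats[ok]        # nearest-cell splat, no z-buffering
    filled[v[ok], u[ok]] = True
    return out, ~filled                  # holes are left for latent inpainting

# Toy usage: reframe a 16x16x4 latent from an identity pose to a camera shifted along x.
K = np.array([[16.0, 0.0, 8.0], [0.0, 16.0, 8.0], [0.0, 0.0, 1.0]])
latent = np.random.randn(16, 16, 4)
depth = np.full((16, 16), 2.0)
src_pose, tgt_pose = np.eye(4), np.eye(4)
tgt_pose[0, 3] = 0.5
points = unproject(depth, K, src_pose)
reframed, hole_mask = reproject(points, latent, K, np.linalg.inv(tgt_pose))
```

In this sketch, `hole_mask` marks latent cells that no source point maps to under the new camera; in the method described above, such regions are what latent code inpainting and harmonization would subsequently fill and refine.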

Method

Comparison with MotionCtrl and CameraCtrl

Integrating Latent-Reframe with Diverse Pretrained Diffusion Models

Basic Rotational Results of Latent-Reframe

Basic Translational Results of Latent-Reframe

Results of Latent-Reframe in Different Styles