Latent-Reframe: Enabling Camera Control for Video Diffusion Model without Training

Zhenghong Zhou, Jie An, Jiebo Luo
University of Rochester

Complex Pose Results of Latent-Reframe

Abstract

Precise camera pose control is crucial for video generation with diffusion models. Existing methods require fine-tuning on additional datasets containing paired videos and camera pose annotations, a process that is both data-intensive and computationally costly and that can disrupt the pre-trained model's distribution. We introduce Latent-Reframe, which enables camera control in a pre-trained video diffusion model without fine-tuning. Unlike existing methods, Latent-Reframe operates during the sampling stage, maintaining efficiency while preserving the original model distribution. Our approach reframes the latent code of video frames to align with the input camera trajectory through time-aware point clouds. Latent code inpainting and harmonization then refine the model's latent space, ensuring high-quality video generation. Experimental results demonstrate that Latent-Reframe achieves camera control precision and video quality comparable or superior to training-based methods, without fine-tuning on additional datasets.
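
The reframing step described above can be pictured as a depth-based warp of each latent frame into the target camera pose. Below is a minimal PyTorch sketch of this idea, not the authors' implementation: the reframe_latent helper, its per-frame depth input, and the nearest-point z-buffer splat are illustrative assumptions, and the paper's time-aware point clouds additionally aggregate geometry across frames, which is omitted here.

import torch

def reframe_latent(latent, depth, K, E_src, E_tgt):
    """Warp one latent frame to a target camera pose via a point cloud.

    latent: (C, H, W) latent features of a frame
    depth:  (H, W) estimated depth aligned with the latent grid
    K:      (3, 3) camera intrinsics at latent resolution
    E_src, E_tgt: (4, 4) world-to-camera extrinsics (source / target)
    Returns the reframed latent and a mask of covered pixels.
    (Hypothetical interface for illustration only.)
    """
    C, H, W = latent.shape
    # Pixel grid -> rays -> 3D points in the source camera frame.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()
    cam = (torch.linalg.inv(K) @ pix.reshape(-1, 3).T).T * depth.reshape(-1, 1)
    # Source camera -> world -> target camera.
    cam_h = torch.cat([cam, torch.ones(H * W, 1)], dim=1)
    world = (torch.linalg.inv(E_src) @ cam_h.T).T
    tgt = (E_tgt @ world.T).T[:, :3]
    # Project into the target view.
    proj = (K @ tgt.T).T
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    u, v = uv[:, 0].round().long(), uv[:, 1].round().long()
    in_view = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (proj[:, 2] > 0)
    # Splat features with a nearest-point z-buffer.
    out = torch.zeros_like(latent)
    zbuf = torch.full((H, W), float("inf"))
    mask = torch.zeros(H, W, dtype=torch.bool)
    src = latent.reshape(C, -1)
    for i in in_view.nonzero().flatten().tolist():
        if proj[i, 2] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = proj[i, 2]
            out[:, v[i], u[i]] = src[:, i]
            mask[v[i], u[i]] = True
    return out, mask  # uncovered pixels are left for latent inpainting

Pixels the warp leaves uncovered (mask == False) correspond to the regions that latent code inpainting and harmonization then fill in during sampling.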

Method

Basic Rotational Results of Latent-Reframe

Basic Translational Results of Latent-Reframe

Different Style Results of Latent-Reframe

BibTeX

@article{zhou2024latentreframe,
  title={Latent-Reframe: Enabling Camera Control for Video Diffusion Model without Training},
  author={Zhenghong Zhou and Jie An and Jiebo Luo},
  journal={arXiv preprint arXiv:2412.06029},
  year={2024},
}