Video Diffusion Models

Summarize the paper.

AI: The paper proposes and evaluates a diffusion model for video generation, conditioned on captions via BERT-large embeddings. The model extends the standard image diffusion architecture to video and achieves state-of-the-art results on established benchmarks for video prediction and unconditional video generation. The paper also introduces a new reconstruction-guided conditional sampling method that outperforms previously proposed techniques for generating longer and higher-resolution videos. The goal of this work is to advance research on generative modeling methods while acknowledging their potential harmful impacts.
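The caption conditioning described above can be sketched in miniature. This is a hedged illustration, not the paper's architecture: the toy "encoder" stands in for a frozen text encoder (the paper reports using BERT-large embeddings), and the update rule stands in for a denoising network that the embedding steers. All names, shapes, and constants here are illustrative assumptions.

```python
import numpy as np

# Toy sketch: encode a caption once with a frozen encoder, then let the
# resulting vector steer every denoising step. The "encoder" and update
# rule below are stand-ins, not the paper's model.

rng = np.random.default_rng(1)
EMB_DIM = 8

def encode_caption(caption):
    # Stand-in for a frozen text encoder: bucket words into a fixed vector.
    words = caption.split()
    vec = np.zeros(EMB_DIM)
    for word in words:
        vec[sum(map(ord, word)) % EMB_DIM] += 1.0
    return vec / max(len(words), 1)

def denoise_step(z_t, cond):
    # Conditioning as a pull toward a caption-dependent target; real models
    # inject the embedding via attention or feature-wise modulation instead.
    return z_t + 0.2 * (cond - z_t)

cond = encode_caption("a corgi surfing a wave")  # hypothetical caption
z = rng.normal(size=(4, EMB_DIM))                # 4 noisy "frames" of features
for _ in range(20):
    z = denoise_step(z, cond)

print(np.abs(z - cond).max())  # residual distance to the conditioned target
```

After enough steps every frame's features contract toward the caption-dependent target, which is the qualitative effect text conditioning has on the sampler.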


Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, David J. Fleet


Generating temporally coherent, high-fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture, and it enables joint training on image and video data, which we find reduces the variance of minibatch gradients and speeds up optimization. To generate longer and higher-resolution videos, we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present the first results on a large text-conditioned video generation task, as well as state-of-the-art results on established benchmarks for video prediction and unconditional video generation. Supplementary material is available at
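The new conditional sampling technique (reconstruction-guided sampling) can be illustrated with a toy sketch: when generating frames conditioned on observed frames, each sampling step is nudged by the gradient of a reconstruction error between the denoiser's estimate of the conditioning frames and the observed frames. The linear stand-in denoiser, the analytic gradient, and the guidance weight below are all simplifying assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(z_t, t):
    # Stand-in denoiser: a linear map from the noisy sample to a clean
    # estimate. A real model would be a learned video U-Net.
    return z_t * (1.0 - t)

def reconstruction_guided_step(z_t, t, x_cond, cond_idx, weight=1.0):
    """One guided update: steer the denoised estimate of the
    conditioning frames toward the observed frames x_cond."""
    x_hat = denoise(z_t, t)
    # Reconstruction error on the conditioning frames only.
    err = x_cond - x_hat[cond_idx]
    # Gradient of the squared error w.r.t. z_t; for this linear toy
    # denoiser, d x_hat / d z_t = (1 - t).
    grad = np.zeros_like(z_t)
    grad[cond_idx] = err * (1.0 - t)
    return z_t + weight * grad

# Toy "video": 4 frames of 2x2 pixels; condition on the first frame.
z = rng.normal(size=(4, 2, 2))
x_first = np.ones((1, 2, 2))
for t in np.linspace(0.9, 0.1, 20):
    z = reconstruction_guided_step(z, t, x_first, cond_idx=slice(0, 1),
                                   weight=0.5)

# The denoised first frame is now pulled close to the observed frame.
print(np.abs(denoise(z, 0.1)[0] - x_first[0]).max())
```

The same mechanism extends a video in time (condition on earlier frames) or in space (condition on a low-resolution version), which is how the paper uses it to produce longer, higher-resolution samples.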
