We introduce MAGI, a hybrid video generation framework that combines masked modeling for intra-frame generation with causal modeling for next-frame generation.
Our key innovation, Complete Teacher Forcing (CTF), conditions masked frames on complete observation frames rather than on masked ones (as in Masked Teacher Forcing, MTF), enabling a smooth transition from token-level (patch-level) to frame-level autoregressive video generation.
CTF significantly outperforms MTF, achieving a 23% improvement in FVD on first-frame-conditioned video prediction. To address issues such as exposure bias, we employ targeted training strategies, setting a new benchmark in autoregressive video generation. Experiments show that MAGI can generate long, coherent video sequences exceeding 100 frames even when trained on as few as 16 frames, highlighting its potential for scalable, high-quality video generation.
Our proposed model, MAGI, inherits all the advantages of traditional patch-level autoregressive models.
Masked Teacher Forcing (MTF) extends masked image generation to video prediction by using causal temporal attention, but it suffers from a training-inference gap: frames are heavily masked during training, whereas at inference each frame is predicted from fully generated previous frames. To address this, we propose Complete Teacher Forcing (CTF), which conditions on unmasked frames during training to predict masked frames, bridging this gap.
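The conditioning difference between CTF and MTF can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function `ctf_training_inputs`, the `MASK` sentinel value, and the list-of-token-ids frame representation are all hypothetical simplifications of the actual tokenized video pipeline.

```python
import random

MASK = -1  # hypothetical mask-token id standing in for a learned [MASK] embedding

def ctf_training_inputs(frames, mask_ratio=0.6, seed=0):
    """Sketch of Complete Teacher Forcing (CTF) training inputs.

    frames: list of frames, each a list of token ids.
    For each frame t > 0, the condition is the COMPLETE (unmasked)
    frame t-1, matching what the model sees at inference. The target
    frame t has a random subset of its tokens replaced by MASK for
    the model to predict. Under Masked Teacher Forcing (MTF), the
    condition frame would itself be masked, creating the
    training-inference gap that CTF avoids.
    """
    rng = random.Random(seed)
    conditions, targets, token_masks = [], [], []
    for t in range(1, len(frames)):
        cond = list(frames[t - 1])  # CTF: complete observation frame
        mask = [rng.random() < mask_ratio for _ in frames[t]]
        tgt = [MASK if m else tok for tok, m in zip(frames[t], mask)]
        conditions.append(cond)
        targets.append(tgt)
        token_masks.append(mask)
    return conditions, targets, token_masks
```

In an MTF variant of this sketch, `cond` would also have tokens replaced by `MASK`, so the model never trains on the fully observed frames it must condition on at inference time.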