
Time-to-Move

Training-Free Motion-Controlled Video Generation via Dual-Clock Denoising

Assaf Singer† · Noam Rotstein† · Amir Mann · Ron Kimmel · Or Litany

† Equal contribution

Project Page | arXiv Paper


Warped | Ours | Warped | Ours

Table of Contents

  • Inference
  • Generate Your Own Cut-and-Drag Examples
  • TODO
  • BibTeX

Inference

Time-to-Move (TTM) is a plug-and-play technique that can be integrated into any image-to-video diffusion model. We provide implementations for Wan 2.2, CogVideoX, and Stable Video Diffusion (SVD). As expected, the stronger the base model, the better the resulting videos. Adapting TTM to new models and pipelines is straightforward and can typically be done in just a few hours. We recommend using Wan, which generally produces higher-quality results and adheres more faithfully to user-provided motion signals.

For each model, you can use the included examples or create your own as described in Generate Your Own Cut-and-Drag Examples.

Dual Clock Denoising

TTM depends on two hyperparameters, t_weak and t_strong, which start denoising different regions at different noise depths. In practice, we do not pass t_weak and t_strong as raw timesteps. Instead we pass tweak-index and tstrong-index, which indicate the iteration at which each denoising phase begins out of the total num_inference_steps (50 for all models). Constraints: 0 ≤ tweak-index ≤ tstrong-index ≤ num_inference_steps.
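For intuition, the snippet below shows how an iteration index maps to a raw scheduler timestep. It is illustrative only: the DDIMScheduler with default settings is a stand-in, not necessarily the scheduler each backbone uses, and the pipelines wire this up internally.

```python
# Illustrative only: mapping an iteration index to a raw scheduler timestep.
from diffusers import DDIMScheduler  # stand-in scheduler for this sketch

num_inference_steps = 50
scheduler = DDIMScheduler()                    # default config, for illustration
scheduler.set_timesteps(num_inference_steps)

tweak_index, tstrong_index = 3, 7              # iteration indices passed on the CLI
t_weak = scheduler.timesteps[tweak_index]      # timestep where out-of-mask denoising begins
t_strong = scheduler.timesteps[tstrong_index]  # timestep where in-mask denoising begins
print(int(t_weak), int(t_strong))
```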

  • tweak-index: the iteration at which denoising outside the mask begins (see the sketch after this list).
    • Too low: scene deformations, object duplication, or unintended camera motion.
    • Too high: regions outside the mask look static (e.g., non-moving backgrounds).
  • tstrong-index: the iteration at which denoising within the mask begins. In our experience, the right value depends on mask size and mask quality.
    • Too low: the object may drift from the intended path.
    • Too high: the object may look rigid or over-constrained.
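To make the two clocks concrete, here is a minimal, heavily simplified sketch of the idea; it is not the repo's actual implementation. It assumes a DDPM/DDIM-style scheduler exposing `timesteps` and `add_noise`, and the names `latents` (current noisy video latent), `warped_latents` (encoded crude warped video), `mask` (1 inside the dragged region), and `denoise_step` (a hypothetical wrapper around one step of the base I2V model) are placeholders.

```python
import torch

def dual_clock_denoise(latents, warped_latents, mask, scheduler, denoise_step,
                       tweak_index, tstrong_index):
    """Illustrative dual-clock loop: each region stays anchored to the (re-noised)
    warped video until its own clock starts, then is denoised freely."""
    for i, t in enumerate(scheduler.timesteps):
        # One ordinary denoising step of the base image-to-video model (hypothetical helper).
        latents = denoise_step(latents, t)

        # Re-noise the warped reference to (roughly) the current noise level.
        # A real implementation keeps this exactly consistent with the scheduler state.
        anchor = scheduler.add_noise(warped_latents, torch.randn_like(warped_latents), t)

        if i < tweak_index:
            # Outside the mask has not started denoising yet: hold it at the anchor.
            latents = torch.where(mask.bool(), latents, anchor)
        if i < tstrong_index:
            # Inside the mask stays anchored for longer, so the motion follows the drag.
            latents = torch.where(mask.bool(), anchor, latents)
    return latents
```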

Wan

To set up the environment for running Wan 2.2, follow the installation instructions in the official Wan 2.2 repository. Our implementation builds on the 🤗 Diffusers Wan I2V pipeline, adapted for TTM using the I2V 14B backbone.

Run inference (using the included Wan examples):

python run_wan.py \
  --input-path "./examples/cutdrag_wan_Monkey" \
  --output-path "./outputs/wan_monkey.mp4" \
  --tweak-index 3 \
  --tstrong-index 7

Good starting points:

  • Cut-and-Drag: tweak-index=3, tstrong-index=7
  • Camera control: tweak-index=2, tstrong-index=5

CogVideoX

To set up the environment for running CogVideoX, follow the installation instructions in the official CogVideoX repository. Our implementation builds on the 🤗 Diffusers CogVideoX I2V pipeline, which we adapt for Time-to-Move (TTM) using the CogVideoX-I2V 5B backbone.

Run inference (on the included 49-frame CogVideoX example):

python run_cog.py \
  --input-path "./examples/cutdrag_cog_Monkey" \
  --output-path "./outputs/cog_monkey.mp4" \
  --tweak-index 4 \
  --tstrong-index 9

Stable Video Diffusion

To set up the environment for running SVD, follow the installation instructions in the official SVD repository.
Our implementation builds on the 🤗 Diffusers SVD I2V pipeline, which we adapt for Time-to-Move (TTM).

Run inference (on the included 21-frame SVD example):

python run_svd.py \
  --input-path "./examples/cutdrag_svd_Fish" \
  --output-path "./outputs/svd_fish.mp4" \
  --tweak-index 16 \
  --tstrong-index 21

Generate Your Own Cut-and-Drag Examples

We provide an easy-to-use GUI for creating cut-and-drag examples that can later be used for video generation in Time-to-Move. We recommend reading the GUI guide before using it.

Cut-and-Drag GUI Example

To get started quickly, create a new environment and run:

pip install PySide6 opencv-python numpy imageio imageio-ffmpeg
python GUIs/cut_and_drag.py

TODO 🛠️

  • Wan 2.2 run code
  • CogVideoX run code
  • SVD run code
  • Cut-and-Drag examples
  • Camera-control examples
  • Cut-and-Drag GUI
  • Cut-and-Drag GUI guide
  • Evaluation code

BibTeX

@misc{singer2025timetomovetrainingfreemotioncontrolled,
      title={Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising}, 
      author={Assaf Singer and Noam Rotstein and Amir Mann and Ron Kimmel and Or Litany},
      year={2025},
      eprint={2511.08633},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.08633}, 
}