SVOR (Stable Video Object Removal)
Official PyTorch code for From Ideal to Real: Stable Video Object Removal under Imperfect Conditions
⭐ If SVOR is helpful to your projects, please help star this repo. Thanks! 🤗
Overview
Removing objects from videos remains difficult in the presence of real-world imperfections such as shadows, abrupt motion, and defective masks. Existing diffusion-based video inpainting models often struggle to maintain temporal stability and visual consistency under these challenges. We propose Stable Video Object Removal (SVOR), a robust framework that achieves shadow-free, flicker-free, and mask-defect-tolerant removal through three key designs: (1) Mask Union for Stable Erasure (MUSE), a windowed union strategy applied during temporal mask downsampling to preserve all target regions observed within each window, effectively handling abrupt motion and reducing missed removals; (2) Denoising-Aware Segmentation (DA-Seg), a lightweight segmentation head on a decoupled side branch equipped with Denoising-Aware AdaLN and trained with mask degradation to provide an internal diffusion-aware localization prior without affecting content generation; and (3) Curriculum Two-Stage Training, where Stage I performs self-supervised pretraining on unpaired real-background videos with online random masks to learn realistic background and temporal priors, and Stage II refines on synthetic pairs using mask degradation and side-effect-weighted losses, jointly removing objects and their associated shadows/reflections while improving cross-domain robustness. Extensive experiments show that SVOR attains new state-of-the-art results across multiple datasets and degraded-mask benchmarks, advancing video object removal from ideal settings toward real-world applications.
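As a rough illustration of the MUSE idea described above (a minimal sketch, not the repo's implementation; `muse_downsample` and `stride` are illustrative names), taking the union of binary masks within each temporal window ensures that any region occupied by the target at any frame of the window remains masked after downsampling:

```python
import numpy as np

def muse_downsample(masks: np.ndarray, stride: int) -> np.ndarray:
    """Temporally downsample a binary mask sequence of shape (T, H, W)
    by `stride`, taking the union (logical OR) of all masks inside each
    window so fast-moving targets are not dropped during downsampling."""
    t, h, w = masks.shape
    pad = (-t) % stride  # zero-pad so T is divisible by stride
    if pad:
        masks = np.concatenate(
            [masks, np.zeros((pad, h, w), dtype=masks.dtype)], axis=0
        )
    windows = masks.reshape(-1, stride, h, w)
    # For {0, 1} masks, max over the window is exactly the union.
    return windows.max(axis=1)
```

Because the union is taken per window, a target visible in only one frame of a window still survives into the downsampled mask, which is what makes the strategy tolerant of abrupt motion.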
Results
For more visual results, check out our project page.
Common Masks
*(Side-by-side video comparisons of masked inputs and SVOR results; see the project page for the videos.)*
Defective Masks
*(Side-by-side video comparisons of defective masked inputs and SVOR results; see the project page for the videos.)*
Dependencies and Installation
The code is tested with Python 3.10
Clone Repo
```shell
git clone https://github.com/xiaomi-research/SVOR.git
```

Create Conda Environment and Install Dependencies

```shell
# create new anaconda env
conda create -n svor python=3.10 -y
conda activate svor

# install pytorch and xformers
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 xformers==0.0.30

# install other python dependencies
pip install -r requirements.txt
```

[Optional] Install flash-attn, refer to flash-attention

```shell
pip install packaging ninja psutil
pip install flash-attn==2.7.4.post1 --no-build-isolation
```
[Optional] Run with docker
```shell
# note: Docker image names must be lowercase
docker build -f Dockerfile.ds -t svor:latest .
docker run --gpus all -it --rm -v /path/to/videos:/data -v /path/to/models:/root/models svor:latest
```
Pretrained Weights
Download the pretrained weights and place them in `models/`:
- download Wan-AI/Wan2.1-VACE-1.3B
- download our two trained LoRAs from HigherHu/SVOR
The files in models/ are as follows:
```
models/
├── put models here.txt
├── remove_model_stage1.safetensors
├── remove_model_stage2.safetensors
└── Wan2.1-VACE-1.3B/
```
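A tiny helper can sanity-check that the layout above is in place before running inference (a hypothetical snippet, not part of the repo; `missing_weights` is an illustrative name, and the filenames come from the tree above):

```python
from pathlib import Path

# Files/folders expected under models/, per the directory tree in the README.
REQUIRED = [
    "remove_model_stage1.safetensors",
    "remove_model_stage2.safetensors",
    "Wan2.1-VACE-1.3B",
]

def missing_weights(models_dir: str) -> list[str]:
    """Return the required weight files/folders not yet present in models_dir."""
    root = Path(models_dir)
    return [name for name in REQUIRED if not (root / name).exists()]
```

Calling `missing_weights("models")` before a run surfaces a clear list of what still needs downloading instead of a mid-inference load error.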
Quick test
Run the following script; the results will be saved to `samples/SVOR/`:

```shell
python predict_SVOR.py \
    --input_video samples/input/bmx-bumps_raw.mp4 \
    --input_mask_video samples/input/bmx-bumps_mask.mp4
```
Usage:

```shell
python predict_SVOR.py [options]
```

Some key options:

```
--input_video           Path to input video
--input_mask_video      Path to mask video
--num_inference_steps   Inference steps (default: 20)
--save_dir              Output directory
--sample_size           Frame size: height width (default: 720 1280)
```
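To script batch runs, the documented options above can be assembled into a command line and handed to `subprocess.run` (a minimal sketch; `build_svor_command` is a hypothetical helper, not part of the repo, and the defaults mirror the option list above):

```python
def build_svor_command(input_video: str, input_mask_video: str,
                       num_inference_steps: int = 20,
                       save_dir: str = "samples/SVOR",
                       sample_size: tuple[int, int] = (720, 1280)) -> list[str]:
    """Assemble a predict_SVOR.py invocation from the documented CLI options."""
    return [
        "python", "predict_SVOR.py",
        "--input_video", input_video,
        "--input_mask_video", input_mask_video,
        "--num_inference_steps", str(num_inference_steps),
        "--save_dir", save_dir,
        "--sample_size", str(sample_size[0]), str(sample_size[1]),
    ]

# To actually execute (from the repo root):
# import subprocess
# subprocess.run(build_svor_command("in.mp4", "mask.mp4"), check=True)
```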
ATTENTION: Inference requires at least 40 GB of GPU memory.
Interactive Demo
Install SAM2 and download the pretrained weights sam2.1_hiera_large.pt to `models/`.

Start the gradio demo:

```shell
python -m demo.gradio_app
```

Ensure it prints the following information:

```
... [Info] SAM2 Predictor initialized successfully
... [Info] Removal model Predictor initialized successfully
Running on local URL: http://0.0.0.0:7861
```

Then open the web page: `http://[ServerIP]:7861`
Usage
1. Upload a video and click the "Process video" button in the "1. Upload and Preprocess" tab
2. Switch to the "2. Annotate and Propagate" tab and click to segment the objects
3. Click "Add annotation" and "Propagate masks" to finish the segmentation
4. Check the object ID in "Display object list" and switch to the "3. Remove Objects" tab
5. Click "Preview video" to preview the input video and mask video
6. Click "Start removal" to run the SVOR algorithm
RORD-50 Dataset
The RORD-50 Dataset can be downloaded from TBD
Acknowledgement
Our work benefits from the following open-source projects:
Citation
If you find our repo useful for your research, please consider citing our paper:
@article{hu2026svor,
title={From Ideal to Real: Stable Video Object Removal under Imperfect Conditions},
author={Hu, Jiagao and Chen, Yuxuan and Li, Fuhao and Wang, Zepeng and Wang, Fei and Zhou, Daiguo and Luan, Jian},
journal={arXiv preprint arXiv:2603.09283},
year={2026}
}