CAST: Modeling Visual State Transitions for Consistent Video Retrieval
Abstract
CAST is a context-aware state transition method that addresses the context-agnostic limitations of retrieval-based video content composition, improving narrative coherence by predicting state-conditioned updates from visual history.
As video content creation shifts toward long-form narratives, composing short clips into coherent storylines becomes increasingly important. However, prevailing retrieval formulations remain context-agnostic at inference time, prioritizing local semantic alignment while neglecting state and identity consistency. To address this structural limitation, we formalize the task of Consistent Video Retrieval (CVR) and introduce a diagnostic benchmark spanning YouCook2, COIN, and CrossTask. We propose CAST (Context-Aware State Transition), a lightweight, plug-and-play adapter compatible with diverse frozen vision-language embedding spaces. By predicting a state-conditioned residual update (Δ) from visual history, CAST introduces an explicit inductive bias for latent state evolution. Extensive experiments show that CAST improves performance on YouCook2 and CrossTask, remains competitive on COIN, and consistently outperforms zero-shot baselines across diverse foundation backbones. Furthermore, CAST provides a useful reranking signal for black-box video generation candidates (e.g., from Veo), promoting more temporally coherent continuations.
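The page does not include implementation details, but the mechanism described above (a residual update Δ predicted from visual history, applied on top of frozen embeddings) admits a simple sketch. Below is a minimal PyTorch illustration, assuming a GRU history encoder and a small residual head; all module choices, names, and dimensions are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CASTAdapter(nn.Module):
    """Hypothetical sketch of a context-aware state transition adapter.

    Operates on top of frozen vision-language embeddings: a recurrent
    encoder summarizes the visual history into a latent state, and a
    small head predicts a residual update (delta) that is added to the
    current query embedding before retrieval.
    """

    def __init__(self, embed_dim: int, hidden_dim: int = 512):
        super().__init__()
        # Assumed components: a GRU over past clip embeddings and an
        # MLP residual head; the paper may use a different design.
        self.history_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.delta_head = nn.Sequential(
            nn.Linear(hidden_dim, embed_dim),
            nn.Tanh(),  # keep the residual update bounded
        )

    def forward(self, query_emb: torch.Tensor, history_embs: torch.Tensor) -> torch.Tensor:
        # query_emb:    (batch, embed_dim) frozen backbone feature for the query
        # history_embs: (batch, num_past_clips, embed_dim) frozen features
        _, state = self.history_encoder(history_embs)    # (1, batch, hidden_dim)
        delta = self.delta_head(state.squeeze(0))        # (batch, embed_dim)
        updated = query_emb + delta                      # state-conditioned residual update
        return nn.functional.normalize(updated, dim=-1)  # for cosine-similarity retrieval
```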
Community
Video retrieval systems often return clips that are semantically relevant but inconsistent with the ongoing procedural state or identity in a multi-step activity.
We formalize the task of Consistent Video Retrieval (CVR) and introduce a diagnostic benchmark, spanning YouCook2, COIN, and CrossTask, designed to expose these failures.
We also propose CAST, a lightweight adapter that models latent visual state transitions on top of frozen vision-language embeddings.
CAST consistently improves retrieval consistency across datasets and backbones, and can even help select more coherent candidates in video generation pipelines.
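To make the reranking use case concrete, here is a hypothetical scoring routine built on the adapter sketched above: candidate clips (e.g., continuations sampled from a black-box generator such as Veo) are encoded with the same frozen backbone and ranked by similarity to the state-updated query embedding. The function name and scoring scheme are our illustration, not the paper's exact procedure.

```python
def rerank_candidates(cast, query_emb, history_embs, candidate_embs):
    """Rerank candidate clip embeddings by consistency with the visual history.

    candidate_embs: (num_candidates, embed_dim) L2-normalized frozen-backbone
    features, e.g. for continuations from a black-box video generator.
    Returns candidate indices sorted from most to least consistent.
    """
    # Apply the state-conditioned residual update to the query embedding.
    updated = cast(query_emb.unsqueeze(0), history_embs.unsqueeze(0))  # (1, embed_dim)
    # Cosine similarity against each candidate (both sides normalized).
    scores = candidate_embs @ updated.squeeze(0)  # (num_candidates,)
    return scores.argsort(descending=True)
```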
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- VIRTUE: Versatile Video Retrieval Through Unified Embeddings (2026)
- VidVec: Unlocking Video MLLM Embeddings for Video-Text Retrieval (2026)
- SemanticMoments: Training-Free Motion Similarity via Third Moment Features (2026)
- PokeFusion Attention: Enhancing Reference-Free Style-Conditioned Generation (2026)
- Pix2Key: Controllable Open-Vocabulary Retrieval with Semantic Decomposition and Self-Supervised Visual Dictionary Learning (2026)
- Olaf-World: Orienting Latent Actions for Video World Modeling (2026)
- ITO: Images and Texts as One via Synergizing Multiple Alignment and Training-Time Fusion (2026)