InstrAct: Towards Action-Centric Understanding in Instructional Videos
Abstract: Understanding instructional videos requires recognizing fine-grained actions and modeling their temporal relations, which remains challenging for current Video Foundation Models (VFMs). This difficulty stems from noisy web supervision and a pervasive "static bias", whereby models rely on object cues rather than motion. To address this, we propose InstrAct, a pretraining framework that learns action-centric representations for instructional videos. We first introduce a data curation strategy that filters noisy captions and generates action-centric hard negatives, disentangling actions from objects during contrastive learning. At the visual feature level, an Action Perceiver distills motion-relevant tokens from redundant video encodings. Beyond contrastive learning, we add two auxiliary objectives: Dynamic Time Warping alignment (DTW-Align), which models sequential temporal structure, and Masked Action Modeling (MAM), which strengthens cross-modal grounding. Finally, we introduce InstrAct Bench to evaluate action-centric understanding; our method consistently outperforms state-of-the-art VFMs on semantic reasoning, procedural logic, and fine-grained retrieval tasks.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.08762 [cs.CV] (or arXiv:2604.08762v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2604.08762 (arXiv-issued DOI via DataCite, pending registration)

Submission history
From: Zhuoyi Yang
[v1] Thu, 9 Apr 2026 20:51:13 UTC (25,525 KB)
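The abstract describes disentangling actions from objects by adding action-centric hard negatives to contrastive learning. A minimal sketch of how such a loss could look is below; the paper does not give the exact formulation, so the function name, the assumption of one verb-swapped negative caption per sample, and the temperature value are all hypothetical.

```python
import torch
import torch.nn.functional as F

def action_contrastive_loss(video_emb, text_emb, hard_neg_emb, temperature=0.07):
    """InfoNCE over a batch, extended with per-sample action-swapped hard negatives.

    video_emb:    (B, D) pooled video embeddings
    text_emb:     (B, D) embeddings of the matching captions
    hard_neg_emb: (B, D) embeddings of captions with the verb swapped (hypothetical)
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    n = F.normalize(hard_neg_emb, dim=-1)

    logits = v @ t.T / temperature                       # (B, B) in-batch negatives
    hard = (v * n).sum(-1, keepdim=True) / temperature   # (B, 1) action-swapped negatives
    logits = torch.cat([logits, hard], dim=1)            # (B, B+1)

    labels = torch.arange(v.size(0), device=v.device)    # positives on the diagonal
    return F.cross_entropy(logits, labels)
```

Because the hard negative shares the objects of the positive caption and differs only in the verb, the model can only push it away by attending to motion, which is the stated point of the strategy.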
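The Action Perceiver is described only as extracting motion-relevant tokens from redundant video encodings. A Perceiver-style module typically does this with a small set of learned latent queries cross-attending to the dense patch tokens; the sketch below illustrates that pattern under assumed dimensions, and all names in it are hypothetical.

```python
import torch
import torch.nn as nn

class ActionPerceiver(nn.Module):
    """Compress dense video tokens into a few motion-relevant tokens (sketch)."""

    def __init__(self, dim=512, num_latents=16, heads=8):
        super().__init__()
        # Learned queries that will each gather one compact summary token
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_tokens):
        # video_tokens: (B, N, D) dense spatio-temporal patch tokens, N >> num_latents
        q = self.latents.unsqueeze(0).expand(video_tokens.size(0), -1, -1)
        out, _ = self.attn(q, video_tokens, video_tokens)  # latents read from all patches
        return self.norm(out)                              # (B, num_latents, D)
```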
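DTW-Align presumably scores how well the temporal order of video clips matches the order of caption steps. A standard differentiable variant is soft-DTW, which replaces the hard minimum in the DTW recursion with a smoothed minimum; the sketch below assumes cosine-based costs and monotone alignment, neither of which the abstract confirms.

```python
import torch
import torch.nn.functional as F

def soft_dtw_align(video_seq, text_seq, gamma=0.1):
    """Soft-DTW cost between a clip sequence and a step-caption sequence (sketch).

    video_seq: (T, D) per-clip embeddings, in temporal order
    text_seq:  (S, D) per-step caption embeddings, in script order
    Returns a differentiable cost; lower means better sequential agreement.
    """
    T, S = video_seq.size(0), text_seq.size(0)
    v = F.normalize(video_seq, dim=-1)
    s = F.normalize(text_seq, dim=-1)
    cost = 1.0 - v @ s.T                    # (T, S) pairwise 1 - cosine similarity

    inf = cost.new_tensor(float("inf"))
    # R[i][j]: soft-min accumulated cost of aligning the first i clips to the first j steps
    R = [[inf] * (S + 1) for _ in range(T + 1)]
    R[0][0] = cost.new_tensor(0.0)
    for i in range(1, T + 1):
        for j in range(1, S + 1):
            prev = torch.stack([R[i - 1][j], R[i][j - 1], R[i - 1][j - 1]])
            # Smoothed minimum over the three monotone predecessors keeps gradients flowing
            R[i][j] = cost[i - 1, j - 1] - gamma * torch.logsumexp(-prev / gamma, dim=0)
    return R[T][S]
```

Minimizing this cost rewards encoders whose clip embeddings preserve the procedural order of the steps, which matches the abstract's claim that DTW-Align models sequential temporal structure.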
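Masked Action Modeling, as named, suggests masking verb tokens in the caption and predicting them from video evidence, in the spirit of masked language modeling. The module below is one plausible instantiation; the decoder depth, the masking scheme, and the loss are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class MaskedActionModeling(nn.Module):
    """Recover masked verb tokens from video context (hypothetical sketch)."""

    def __init__(self, vocab_size, dim=512, heads=8, layers=2):
        super().__init__()
        self.mask_emb = nn.Parameter(torch.zeros(dim))  # shared [MASK] embedding
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, text_tokens, video_tokens, action_mask, target_ids):
        # text_tokens:  (B, L, D) caption token embeddings
        # video_tokens: (B, K, D) motion-relevant tokens from the video side
        # action_mask:  (B, L) bool, True at verb positions to mask out
        # target_ids:   (B, L) vocabulary ids of the original caption
        x = torch.where(action_mask.unsqueeze(-1), self.mask_emb, text_tokens)
        x = self.decoder(tgt=x, memory=video_tokens)  # masked slots attend to video
        logits = self.head(x)                         # (B, L, vocab_size)
        return nn.functional.cross_entropy(
            logits[action_mask], target_ids[action_mask]
        )
```

Since the masked positions can only be filled by reading the video tokens, this objective forces the cross-modal grounding of actions that the abstract attributes to MAM.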