CamReasoner: Reinforcing Camera Movement Understanding via Structured Spatial Reasoning
Abstract: Understanding camera dynamics is a fundamental pillar of video spatial intelligence. However, existing multimodal models predominantly treat this task as black-box classification, often confusing physically distinct motions by relying on superficial visual patterns rather than geometric cues. We present \textbf{CamReasoner}, a framework that reformulates camera movement understanding as a structured inference process to bridge the gap between perception and cinematic logic. Our approach centers on the Observation-Thinking-Answer (O-T-A) paradigm, which compels the model to articulate spatio-temporal observations and reason about motion patterns within an explicit reasoning block. To instill this capability, we construct a large-scale inference trajectory suite comprising 18k SFT reasoning chains and 38k RL feedback samples. To the best of our knowledge, \textbf{we are the first to employ RL for logical alignment in camera movement understanding}, ensuring motion inferences are grounded in structured visual reasoning rather than contextual guesswork. Built upon Qwen2.5-VL-7B, CamReasoner-7B improves binary classification accuracy from 73.8\% to 78.4\% and VQA accuracy from 60.9\% to 74.5\% over its backbone, consistently outperforming both proprietary and open-source baselines across multiple benchmarks.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.00181 [cs.CV] (or arXiv:2602.00181v3 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.00181

Submission history
From: Hang Wu
[v1] Fri, 30 Jan 2026 04:45:43 UTC (1,172 KB)
[v2] Wed, 11 Feb 2026 17:26:00 UTC (1,161 KB)
[v3] Tue, 14 Apr 2026 16:44:13 UTC (1,283 KB)
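The abstract describes the O-T-A paradigm, in which the model must emit observations and explicit reasoning before its answer, and an RL stage that rewards this structure. A common way to implement such a constraint is a format-reward function that scores a response by whether the required blocks are present and non-empty. The sketch below illustrates that idea; the tag names (`<observation>`, `<think>`, `<answer>`) and the reward function itself are assumptions for illustration, not details taken from the paper.

```python
import re

# Hypothetical O-T-A format reward for RL training.
# Tag names and scoring are illustrative assumptions, not the paper's spec.
OTA_PATTERN = re.compile(
    r"^<observation>(.+?)</observation>\s*"   # spatio-temporal observations
    r"<think>(.+?)</think>\s*"                # explicit reasoning block
    r"<answer>(.+?)</answer>\s*$",            # final motion label
    re.DOTALL,
)

def ota_format_reward(response: str) -> float:
    """Return 1.0 if the response follows the O-T-A structure, else 0.0."""
    match = OTA_PATTERN.match(response.strip())
    if match is None:
        return 0.0
    # Require non-empty content inside every block.
    return 1.0 if all(part.strip() for part in match.groups()) else 0.0
```

In practice such a format reward would be combined with an accuracy reward on the content of the answer block, so the policy is pushed toward both well-structured and correct motion inferences.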