Revisiting Compositionality in Dual-Encoder Vision-Language Models: The Role of Inference
Abstract: Dual-encoder Vision-Language Models (VLMs) such as CLIP are often characterized as bag-of-words systems due to their poor performance on compositional benchmarks. We argue that this limitation may stem less from deficient representations than from the standard inference protocol based on global cosine similarity. First, through controlled diagnostic experiments, we show that explicitly enforcing fine-grained region-segment alignment at inference dramatically improves compositional performance without updating pretrained encoders. We then introduce a lightweight transformer that learns such alignments directly from frozen patch and token embeddings. Comparing against full fine-tuning and prior end-to-end compositional training methods, we find that although these approaches improve in-domain retrieval, their gains do not consistently transfer under distribution shift. In contrast, learning localized alignment over frozen representations matches full fine-tuning on in-domain retrieval while yielding substantial improvements on controlled out-of-domain compositional benchmarks. These results identify global embedding matching as a key bottleneck in dual-encoder VLMs and highlight the importance of alignment mechanisms for robust compositional generalization.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2604.11496 [cs.CV] (or arXiv:2604.11496v2 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.11496

Submission history
From: Imanol Miranda
[v1] Mon, 13 Apr 2026 14:03:18 UTC (467 KB)
[v2] Thu, 16 Apr 2026 10:51:43 UTC (467 KB)
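To make the abstract's central contrast concrete, the sketch below compares the standard dual-encoder inference protocol (pool each modality, then one global cosine similarity) against a fine-grained alignment score computed over the same frozen patch and token embeddings, where each text token is matched to its best image patch (a FILIP-style late-interaction score). This is an illustrative assumption, not the paper's actual scoring function or learned transformer; all function names here are hypothetical.

```python
import numpy as np

def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Normalize vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def global_score(patch_emb: np.ndarray, token_emb: np.ndarray) -> float:
    """CLIP-style inference: pool each modality, then a single global cosine.

    patch_emb: (num_patches, dim) frozen image patch embeddings
    token_emb: (num_tokens, dim) frozen text token embeddings
    """
    img = l2_normalize(patch_emb.mean(axis=0))
    txt = l2_normalize(token_emb.mean(axis=0))
    return float(img @ txt)

def fine_grained_score(patch_emb: np.ndarray, token_emb: np.ndarray) -> float:
    """Localized alignment: match every token to its best patch, then average.

    Because the score is built from token-patch pairs rather than one pooled
    vector, it is sensitive to which region realizes which text segment.
    """
    sim = l2_normalize(token_emb) @ l2_normalize(patch_emb).T  # (tokens, patches)
    return float(sim.max(axis=1).mean())
```

Both scores consume the same frozen embeddings; only the matching step differs, which mirrors the abstract's claim that the inference protocol, not the representation, is the bottleneck.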