Large vision-language models (VLMs) often struggle to generate long and factual captions. However, traditional measures of hallucination and factuality are not well suited to evaluating longer, more diverse captions, or to settings where ground-truth human-annotated captions are unavailable. We introduce OVFact, a novel method for measuring the factuality of long captions that leverages open-vocabulary visual grounding and tool-based verification without depending on human annotations. Our method improves agreement with human judgments and captures both caption descriptiveness (recall) and factual precision in the same metric. Furthermore, unlike previous metrics, our reference-free design enables new applications towards factuality-based data filtering. We observe that models trained on an OVFact-filtered (2.5-5x smaller) subset of a large-scale, noisy (VLM-generated) pretraining set meaningfully improve factual precision without sacrificing caption descriptiveness across a range of downstream long-caption benchmarks.
Overview of OVFact. Our reference-free method assesses two aspects of long captioning – precision and descriptiveness (recall) – in a unified manner, without requiring ground-truth reference annotations. We process a model's output caption into a set of candidate entities C, and assess which subset G is groundable in the input image with open-vocabulary detection and segmentation tools. Precision is then measured as the ratio of entities that remain (e.g., above, "red blanket" and "white curtains" are detected as hallucinations). To measure descriptiveness, we take a large open-vocabulary concept set V and identify which concepts R are grounded in the image, then measure their recall against the candidate entities C from the VLM caption using maximum-similarity scoring. Unlike prior work (Petryk et al., 2024; Rohrbach et al., 2018; Kaul et al., 2024), our method can be directly applied to assess factuality in settings where only model-generated caption outputs are available, such as data filtering.
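As a rough illustration of the two scores described above, here is a minimal Python sketch. The `ground_fn` callable (standing in for an open-vocabulary detection/segmentation tool) and `embed_fn` callable (a text encoder producing unit-norm phrase embeddings) are hypothetical placeholders rather than the paper's actual tooling, and parsing the caption into the candidate entity set C is assumed to happen upstream.

```python
from typing import Callable, Sequence
import numpy as np

def ovfact_scores(
    caption_entities: Sequence[str],      # candidate entity set C, parsed from the caption
    vocabulary: Sequence[str],            # large open-vocabulary concept set V
    image,                                # input image (format depends on the grounding tool)
    ground_fn: Callable[[object, str], bool],         # hypothetical: True if phrase grounds in image
    embed_fn: Callable[[Sequence[str]], np.ndarray],  # hypothetical: phrases -> (n, d) unit-norm embeddings
) -> tuple[float, float]:
    """Sketch of the two OVFact scores: factual precision and descriptiveness (recall)."""
    # Precision: fraction of candidate entities (the subset G of C) that the
    # grounding tool can verify in the image; ungroundable entities count as hallucinated.
    grounded = [e for e in caption_entities if ground_fn(image, e)]
    precision = len(grounded) / max(len(caption_entities), 1)

    # Descriptiveness: first find the concepts R within V that are grounded in the image...
    present = [v for v in vocabulary if ground_fn(image, v)]
    if not present or not caption_entities:
        return precision, 0.0

    # ...then score each grounded concept by its maximum similarity to any caption
    # entity, and average — a soft recall of the image's content within the caption.
    sim = embed_fn(present) @ embed_fn(caption_entities).T  # (|R|, |C|) cosine similarities
    recall = float(sim.max(axis=1).mean())
    return precision, recall
```

Because nothing in this computation consults a reference caption, the same scores can be thresholded per example to filter a noisy, VLM-generated pretraining set, as in the data-filtering application described above.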
@article{wysoczanska2025ovfact,
title={OVFact: Measuring and Improving Open-Vocabulary Factuality for Long Caption Models},
author={Wysocza{\'n}ska, Monika and Buch, Shyamal and Arnab, Anurag and Schmid, Cordelia},
journal={EMNLP Findings},
year={2025}
}