Zongxia Li*, Xiyang Wu*, Yubin Qin, Hongyang Du, Guangyao Shi, Dinesh Manocha, Tianyi Zhou, Jordan Lee Boyd-Graber (* equal contribution)
arXiv 2025
Synthetic video generation with foundation models has gained attention for its realism and wide applications. While these models produce high-quality frames, they often fail to respect common sense and physical laws, resulting in abnormal content. Existing metrics like VideoScore emphasize general quality but ignore such violations and lack interpretability. A more insightful approach is using multi-modal large language models (MLLMs) as interpretable evaluators, as seen in FactScore. Yet, MLLMs' ability to detect abnormalities in synthetic videos remains underexplored. To address this, we introduce VideoHallu, a benchmark featuring synthetic videos from models like Veo2, Sora, and Kling, paired with expert-designed QA tasks solvable via human-level reasoning across various categories. We assess several SoTA MLLMs, including GPT-4o, Gemini-2.5-Pro, Qwen-2.5-VL, and newer models like Video-R1 and VideoChat-R1. Despite strong real-world performance on MVBench and MovieChat, these models still hallucinate on basic commonsense and physics tasks in synthetic settings, underscoring the challenge of hallucination. We further fine-tune SoTA MLLMs using Group Relative Policy Optimization (GRPO) on real and synthetic commonsense/physics data. Results show notable accuracy gains, especially with counterexample integration, advancing MLLMs' reasoning capabilities.
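For readers unfamiliar with GRPO, the sketch below illustrates its core idea: group-normalized advantages combined with a PPO-style clipped surrogate, with no learned critic. It is a generic illustration rather than the training code used in the paper; the usual KL penalty to a reference policy is omitted, and names such as `grpo_loss` are placeholders.

```python
import torch

def grpo_loss(logp_new, logp_old, rewards, clip_eps=0.2):
    """Minimal GRPO surrogate loss for one prompt (illustrative sketch).

    logp_new / logp_old: (G,) summed log-probabilities of G sampled
    responses under the current and the sampling policy.
    rewards: (G,) scalar rewards, e.g. 1.0 if the answer is correct.
    """
    # Group-relative advantage: normalize rewards within the group of
    # G responses, which removes the need for a value network (critic).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # PPO-style clipped surrogate on the importance ratio.
    ratio = torch.exp(logp_new - logp_old)
    surrogate = torch.min(ratio * adv,
                          torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
    return -surrogate.mean()  # a KL term to a reference policy is usually added

# Toy usage: four sampled answers to one video QA prompt.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
logp_old = torch.tensor([-5.2, -6.1, -4.8, -5.5])
logp_new = logp_old + 0.05
print(grpo_loss(logp_new, logp_old, rewards))
```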
Zongxia Li*, Xiyang Wu*, Hongyang Du, Huy Nghiem, Guangyao Shi (* equal contribution)
CVPR TMM-OpenWorld 2025 Oral
Multimodal Vision Language Models (VLMs) have emerged as a transformative technology at the intersection of computer vision and natural language processing, enabling machines to perceive and reason about the world through both visual and textual modalities. For example, models such as CLIP, Claude, and GPT-4V demonstrate strong reasoning and understanding abilities on visual and textual data and outperform classical single-modality vision models on zero-shot classification. Despite their rapid advancements in research and growing popularity in applications, a comprehensive survey of existing studies on VLMs is notably lacking, particularly for researchers aiming to leverage VLMs in their specific domains. To this end, we provide a systematic overview of VLMs in the following aspects: model information of the major VLMs developed over the past five years (2019-2024); the main architectures and training methods of these VLMs; summaries and categorizations of the popular benchmarks and evaluation metrics of VLMs; the applications of VLMs, including embodied agents, robotics, and video generation; and the challenges and issues faced by current VLMs, such as hallucination, fairness, and safety.
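As a concrete example of the zero-shot classification mentioned above, the sketch below scores an image against a few text prompts with CLIP via the Hugging Face transformers API. The checkpoint name and the `example.jpg` path are placeholders; this is an illustration of the general technique, not an evaluation protocol from the survey.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Zero-shot classification: score one image against text prompts,
# with no task-specific training.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
image = Image.open("example.jpg")  # placeholder image path

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-to-text similarities; softmax gives
# a distribution over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```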
Nithin Parepally, Ainesh Chatterjee, Auguste Gezalyan, Hongyang Du, Sukrit Mangla, Kenny Wu, Sarah Hwang, David Mount
40th International Symposium on Computational Geometry (SoCG) 2024
There are many structures, both classical and modern, involving convex polygonal geometries whose deeper understanding would be facilitated through interactive visualizations. The Ipe extensible drawing editor, developed by Otfried Cheong, is a widely used software system for generating geometric figures. One of its features is the capability to extend its functionality through programs called Ipelets. In this media submission, we showcase a collection of new Ipelets that construct a variety of geometric objects based on polygonal geometries. These include Macbeath regions, metric balls in the forward and reverse Funk distance, metric balls in the Hilbert metric, polar bodies, the minimum enclosing ball of a point set, and minimum spanning trees in both the Funk and Hilbert metrics. We also include a number of utilities on convex polygons, including union, intersection, subtraction, and Minkowski sum (previously implemented as a CGAL Ipelet).
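The Ipelets themselves are Lua scripts that run inside Ipe; as a rough, standalone illustration of the geometry behind the Funk and Hilbert metric-ball tools, the Python sketch below evaluates the standard cross-ratio formulas for the Funk and Hilbert distances between two points inside a convex polygon. The function names and numpy-based setup are mine, not part of the submission.

```python
import numpy as np

def ray_exit(poly, p, d):
    """Point where the ray p + t*d (t > 0) leaves a convex polygon.

    poly: (n, 2) array of vertices in order; p is assumed to lie inside.
    """
    best = None
    n = len(poly)
    for i in range(n):
        a, e = poly[i], poly[(i + 1) % n] - poly[i]
        M = np.column_stack([d, -e])          # solve p + t*d = a + s*e
        if abs(np.linalg.det(M)) < 1e-12:     # ray parallel to this edge
            continue
        t, s = np.linalg.solve(M, a - p)
        if t > 1e-9 and -1e-9 <= s <= 1 + 1e-9:
            best = t if best is None else max(best, t)
    return p + best * d

def funk(poly, p, q):
    """Funk distance F(p, q) = ln(|p - b| / |q - b|), b = exit point of ray p->q."""
    b = ray_exit(poly, p, q - p)
    return np.log(np.linalg.norm(p - b) / np.linalg.norm(q - b))

def hilbert(poly, p, q):
    """Hilbert distance: the mean of the forward and reverse Funk distances."""
    return 0.5 * (funk(poly, p, q) + funk(poly, q, p))

# Example: two points inside the unit square.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
p, q = np.array([0.2, 0.5]), np.array([0.7, 0.5])
print(funk(square, p, q), hilbert(square, p, q))
```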