---
license: apache-2.0
---

Improved Visual-Spatial Reasoning via R1-Zero-Like Training

Zhenyi Liao, Qingsong Xie, Yanhao Zhang, Zijian Kong, Haonan Lu, Zhenyu Yang, Zhijie Deng

📅 News

  • 🚀 [06/04/2025] We release VSI-100k.
  • 🚀 [04/02/2025] We release our paper on arXiv.

🌞 Highlights

🔔 We identify that the visual-spatial reasoning capacities of small- to medium-sized Qwen2-VL models cannot be activated via Chain of Thought (CoT) prompts.

🔔 We incorporate GRPO training for improved visual-spatial reasoning, using the carefully curated VSI-100k dataset.

🔔 With GRPO training, our vsGRPO-2B outperforms GPT-4o, and our vsGRPO-7B achieves performance comparable to the best open-source model, LLaVA-Video-Next-72B.

🤗 VSI-100k

To combat data scarcity, we build VSI-100k. Specifically, using the ScanNet 3D annotations, we construct approximately 100k question-answer pairs for training.
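
If you want to fetch the released files programmatically, the snippet below is a minimal sketch using the huggingface_hub client; the repo id L-Justice/VSI-100k is an assumption based on this page's location, and the exact file layout may differ.

```python
# Minimal sketch: download the raw VSI-100k files from the Hub.
# The repo id below is an assumption, not confirmed by this README.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="L-Justice/VSI-100k",  # hypothetical repo id; adjust to the actual release
    repo_type="dataset",
)
print(f"Raw VSI-100k files downloaded to: {local_dir}")
```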

Here we release the raw data for the community. Specifically, we split the question types into the six categories below (an illustrative derivation sketch follows the list):

  • Absolute Distance: Given two unique objects in the scene, we provide the distance in meters between them.
  • Object Counting: The total number of objects present in the entire scene.
  • Object Size: The three dimensions of a unique object within the scene.
  • Relative Direction: Given the observer's location and viewpoint, we provide the relative direction of the target with respect to the observer. Note that there are three answer types, distinguished according to the VSI-bench method.
  • Relative Distance: For a given object, we list other objects in the scene from closest to farthest.
  • Room Size: The area of the room in the scene is provided in square meters.
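
For intuition, here is a minimal sketch of how a few of these answer types could be derived from ScanNet-style 3D bounding-box annotations with basic geometry. It is an illustration under assumed object records and field names, not the exact construction pipeline used for VSI-100k.

```python
# Illustrative sketch: deriving several answer types from hypothetical
# ScanNet-style per-object annotations (axis-aligned box center and size, in meters).
import numpy as np

objects = {
    "table": {"center": np.array([1.2, 0.4, 0.5]), "size": np.array([1.6, 0.8, 0.7])},
    "chair": {"center": np.array([2.5, 1.1, 0.45]), "size": np.array([0.5, 0.5, 0.9])},
    "lamp":  {"center": np.array([0.2, 2.0, 1.5]), "size": np.array([0.3, 0.3, 0.6])},
}

# Absolute Distance: Euclidean distance in meters between two unique objects.
abs_dist = np.linalg.norm(objects["table"]["center"] - objects["chair"]["center"])

# Object Size: the three bounding-box dimensions of a unique object.
obj_size = objects["table"]["size"]

# Relative Distance: other objects ordered from closest to farthest w.r.t. a query object.
query = "table"
order = sorted(
    (name for name in objects if name != query),
    key=lambda name: np.linalg.norm(objects[name]["center"] - objects[query]["center"]),
)

# Room Size: floor area in square meters (here an assumed rectangular footprint).
room_extent_xy = np.array([4.0, 3.2])
room_area = float(np.prod(room_extent_xy))

print(f"absolute distance: {abs_dist:.2f} m")
print(f"object size (table): {obj_size.tolist()} m")
print(f"relative distance order from '{query}': {order}")
print(f"room size: {room_area:.2f} m^2")
```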