COMPUTER VISION

PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding

April 17, 2025

Abstract

Vision-language models are integral to computer vision research, yet many high-performing models remain closed-source, obscuring their data, design, and training recipes. The research community has responded by using distillation from black-box models to label training data, achieving strong benchmark results at the cost of measurable scientific progress: without knowing the details of the teacher model and its data sources, scientific progress remains difficult to measure. In this paper, we study building a Perception Language Model (PLM) in a fully open and reproducible framework for transparent research in image and video understanding. We analyze standard training pipelines without distillation from proprietary models and explore large-scale synthetic data to identify critical data gaps, particularly in detailed video understanding. To bridge these gaps, we release 2.8M human-labeled instances of fine-grained video question-answer pairs and spatio-temporally grounded video captions. Additionally, we introduce PLM–VideoBench, a suite for evaluating challenging video understanding tasks, focusing on the ability to reason about the "what", "where", "when", and "how" of a video. We make our work fully reproducible by providing data, training recipes, code, and models.

AUTHORS

Jang Hyun Cho

Andrea Madotto

Effrosyni Mavroudi

Triantafyllos Afouras

Tushar Nagarajan

Muhammad Maaz

Yale Song

Tengyu Ma

Shuming Hu

Hanoona Rasheed

Peize Sun

Po-Yao Huang

Daniel Bolya

Suyog Jain

Miguel Martin

Huiyu Wang

Nikhila Ravi

Shashank Jain

Tammy Stark

Shane Moon

Babak Damavandi

Vivian Lee

Andrew Westbury

Salman Khan

Philipp Krähenbühl

Piotr Dollar

Lorenzo Torresani

Kristen Grauman

Christoph Feichtenhofer

Publisher

arXiv

Research Topics

Computer Vision

Related Publications

April 17, 2025

COMPUTER VISION

Perception Encoder: The best visual embeddings are not at the output of the network

Daniel Bolya, Po-Yao Huang, Peize Sun, Jang Hyun Cho, Andrea Madotto, Chen Wei, Tengyu Ma, Jiale Zhi, Jathushan Rajasegaran, Hanoona Rasheed, Junke Wang, Marco Monteiro, Hu Xu, Shiyu Dong, Nikhila Ravi, Daniel Li (FAIR), Piotr Dollar, Christoph Feichtenhofer

April 14, 2025

RESEARCH

GRAPHICS

Autoregressive Distillation of Diffusion Transformers

Yeongmin Kim, Sotiris Anagnostidis, Yuming Du, Edgar Schoenfeld, Jonas Kohler, Markos Georgopoulos, Albert Pumarola, Ali Thabet, Artsiom Sanakoyeu

March 30, 2025

COMPUTER VISION

Through-The-Mask: Mask-based Motion Trajectories for Image-to-Video Generation

Guy Yariv, Yuval Kirstain, Amit Zohar, Shelly Sheynin, Yaniv Taigman, Yossef (Yossi) Adi, Sagie Benaim, Adam Polyak

March 13, 2025

NLP

COMPUTER VISION

Subobject-level Image Tokenization

Delong Chen, Samuel Cahyawijaya, Jianfeng Liu, Baoyuan Wang, Pascale Fung
