Argus: A Compact and Versatile Foundation Model for Vision.
Weiming Zhuang, Chen Chen, Zhizhong Li, Sina Sajadmanesh, Jingtao Li, Jiabo Huang, Vikash Sehwag, Vivek Sharma, Hirotaka Shinozaki, Felan Carlo Garcia, Yihao Zhan, Naohiro Adachi, Ryoji Eki, Michael Spranger, Peter Stone, and Lingjuan Lyu.
In Conference on Computer Vision and Pattern Recognition, June 2025.
While existing vision and multi-modal foundation models can handle multiple computer vision tasks, they often suffer from significant limitations, including huge demand for data and computational resources during training and inconsistent performance across vision tasks at deployment time. To address these challenges, we introduce Argus (the name comes from Argus Panoptes, a hundred-eyed giant with "all-seeing" capability in Greek mythology), a compact and versatile vision foundation model designed to support a wide range of vision tasks through a unified multitask architecture. Argus employs a two-stage training strategy: (i) multitask pretraining over core vision tasks with a shared backbone that includes a lightweight adapter to inject task-specific inductive biases, and (ii) scalable and efficient adaptation to new tasks by fine-tuning only the task-specific decoders. Extensive evaluations demonstrate that Argus, despite its relatively compact and training-efficient design of merely 100M backbone parameters (only 13.6 percent of which are trained, using 1.6M images), competes with and even surpasses much larger models. Compared to state-of-the-art foundation models, Argus not only covers a broader set of vision tasks but also matches or outperforms models of similar size on 12 tasks. We expect that Argus will accelerate the real-world adoption of vision foundation models in resource-constrained scenarios.
@InProceedings{vfm_cvpr2025,
  author    = {Weiming Zhuang and Chen Chen and Zhizhong Li and Sina Sajadmanesh and Jingtao Li and Jiabo Huang and Vikash Sehwag and Vivek Sharma and Hirotaka Shinozaki and Felan Carlo Garcia and Yihao Zhan and Naohiro Adachi and Ryoji Eki and Michael Spranger and Peter Stone and Lingjuan Lyu},
  title     = {Argus: A Compact and Versatile Foundation Model for Vision},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  month     = {June},
  location  = {Nashville, United States},
  abstract  = {While existing vision and multi-modal foundation models can handle multiple computer vision tasks, they often suffer from significant limitations, including huge demand for data and computational resources during training and inconsistent performance across vision tasks at deployment time. To address these challenges, we introduce Argus (the name comes from Argus Panoptes, a hundred-eyed giant with ``all-seeing'' capability in Greek mythology), a compact and versatile vision foundation model designed to support a wide range of vision tasks through a unified multitask architecture. Argus employs a two-stage training strategy: (i) multitask pretraining over core vision tasks with a shared backbone that includes a lightweight adapter to inject task-specific inductive biases, and (ii) scalable and efficient adaptation to new tasks by fine-tuning only the task-specific decoders. Extensive evaluations demonstrate that Argus, despite its relatively compact and training-efficient design of merely 100M backbone parameters (only 13.6 percent of which are trained, using 1.6M images), competes with and even surpasses much larger models. Compared to state-of-the-art foundation models, Argus not only covers a broader set of vision tasks but also matches or outperforms models of similar size on 12 tasks. We expect that Argus will accelerate the real-world adoption of vision foundation models in resource-constrained scenarios.},
}