HOI-R1: Exploring the Potential of Multimodal Large Language Models for Human-Object Interaction Detection
Paper: arXiv:2510.05609
Inspired by recent advances in reinforcement learning for large language models, HOI-R1 investigates how multimodal large language models can reason about and detect human-object interactions more effectively.
If you find this work useful, please consider citing:
@article{chen2025hoi,
  title={HOI-R1: Exploring the Potential of Multimodal Large Language Models for Human-Object Interaction Detection},
  author={Chen, Junwen and Xiong, Peilin and Yanai, Keiji},
  journal={arXiv preprint arXiv:2510.05609},
  year={2025}
}