Retrv-R1: A Reasoning-Driven MLLM Framework for Universal and Efficient Multimodal Retrieval

City University of Hong Kong, Tencent, Zhejiang University

Abstract


The success of DeepSeek-R1 demonstrates the immense potential of using reinforcement learning (RL) to enhance LLMs' reasoning capabilities. This paper introduces Retrv-R1, the first R1-style MLLM designed specifically for universal multimodal retrieval, which achieves higher performance by reasoning step by step to produce more accurate retrieval results. We find that directly applying the methods of DeepSeek-R1 to retrieval is not feasible, mainly because of (1) the high computational cost of the large number of tokens needed to represent multiple candidates together with their reasoning processes, and (2) the instability and suboptimal results of applying RL directly to retrieval training. To address these issues, Retrv-R1 introduces an information compression module with a details inspection mechanism, which improves computational efficiency by reducing the number of tokens while ensuring that critical information about challenging candidates is preserved. Furthermore, a new training paradigm is proposed, consisting of an activation stage that uses a retrieval-tailored synthetic CoT dataset for more effective optimization, followed by RL with a novel curriculum reward to improve both performance and efficiency. With these designs, Retrv-R1 achieves SOTA performance, high efficiency, and strong generalization ability, as demonstrated by experiments across multiple benchmarks and tasks.
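As a rough illustration of the compress-then-inspect idea described above, the minimal sketch below assembles a model context by giving every candidate a compact token representation and restoring full detail only for candidates flagged as hard. This is not Retrv-R1's actual implementation; compress_candidate, needs_inspection, and full_detail_tokens are hypothetical stand-ins for the paper's learned components.

# Conceptual sketch only, NOT the paper's code: the helpers below are
# simple heuristics standing in for learned modules.

def compress_candidate(cand):
    # Stand-in for the compression module: keep only a few summary tokens.
    return cand["tokens"][:4]

def needs_inspection(query_tokens, cand):
    # Stand-in for a details-inspection trigger: flag candidates whose
    # compact tokens overlap the query, i.e. the "hard" cases.
    return bool(set(compress_candidate(cand)) & set(query_tokens))

def full_detail_tokens(cand):
    # Stand-in for re-expanding a hard candidate to its full token sequence.
    return cand["tokens"]

def build_context(query_tokens, candidates, budget=64):
    """Compressed tokens for every candidate, full detail only where needed."""
    context = list(query_tokens)
    for cand in candidates:
        context += compress_candidate(cand)
    for cand in candidates:
        if needs_inspection(query_tokens, cand):
            context += full_detail_tokens(cand)
    return context[:budget]  # respect the overall token budget

query = ["red", "bicycle", "on", "bridge"]
candidates = [
    {"id": "c1", "tokens": ["red", "bicycle", "leaning", "on", "rail", "daytime"]},
    {"id": "c2", "tokens": ["bowl", "of", "green", "apples", "wooden", "table"]},
]
print(build_context(query, candidates))  # only c1 is expanded to full detail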

Results


Comparison with other methods on the M-BEIR test set. R@K denotes the Recall@K metric. q_t, q_i, c_t, and c_i denote text queries, image queries, text candidates, and image candidates, respectively. Retrv-R1 achieves SOTA performance.
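For reference, the sketch below shows how the Recall@K numbers in the table are computed in principle. It is an illustrative definition only, not the official M-BEIR evaluation script, and the candidate IDs are made up.

# Illustrative Recall@K computation (not the official M-BEIR evaluation code).
def recall_at_k(ranked_ids, relevant_ids, k):
    """1.0 if any relevant candidate appears in the top-k ranking, else 0.0."""
    return float(any(cid in relevant_ids for cid in ranked_ids[:k]))

def mean_recall_at_k(rankings, relevants, k):
    """Average Recall@K over a set of queries."""
    scores = [recall_at_k(r, rel, k) for r, rel in zip(rankings, relevants)]
    return sum(scores) / len(scores)

# Example: the relevant candidate "c_i_42" is ranked third for this query,
# so R@5 = 1.0 while R@1 = 0.0.
ranking = ["c_t_07", "c_i_09", "c_i_42", "c_t_01", "c_i_03"]
print(recall_at_k(ranking, {"c_i_42"}, k=5))  # 1.0
print(recall_at_k(ranking, {"c_i_42"}, k=1))  # 0.0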

Examples


Example queries and retrieval results illustrating the effectiveness of Retrv-R1.

Acknowledgment

This research was partially supported by the RGC General Research Fund 11200323 and the NSFC/RGC JRS Project N_CityU198/24. We thank Mr. Liqun Liu and Mr. Peng Shu from Tencent for their collaboration, insightful discussions, and support with computational resources.

BibTeX

@article{zhu2025retrv,
  title={Retrv-R1: A Reasoning-Driven MLLM Framework for Universal and Efficient Multimodal Retrieval},
  author={Zhu, Lanyun and Ji, Deyi and Chen, Tianrun and Wu, Haiyang and Wang, Shiqi},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}