Towards Harnessing the Collaborative Power of Large and Small Models for Domain Tasks
Yang Liu, Bingjie Yan, Tianyuan Zou, Jianqing Zhang, Zhaohui Gu, Jianbing Ding, Xidong Wang, Jingyi Li, Xiaozhou Ye, Ye Ouyang, Qiang Yang, Ya-Qin Zhang
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2504.17421 · OA: W4415307357
Large language models (LLMs) have demonstrated remarkable capabilities, but they require vast amounts of data and computational resources. In contrast, smaller models (SMs), while less powerful, can be more efficient and tailored to specific domains. In this position paper, we argue that taking a collaborative approach, where large and small models work synergistically, can accelerate the adaptation of LLMs to private domains and unlock new potential in AI. We explore various strategies for model collaboration and identify potential challenges and opportunities. Building upon this, we advocate for industry-driven research that prioritizes multi-objective benchmarks on real-world private datasets and applications.