Published April 24, 2026 at 4:00 AM
Federated Co-tuning Framework for Large and Small Language Models
arXiv:2411.11707v3

Abstract: By adapting Large Language Models (LLMs) to domain-specific tasks or enriching them with domain-specific knowledge, we can fully harness their capabilities. Nonetheless, a gap persists in achieving simultaneous mutual enhancement between large and small language models.
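The excerpt ends before the paper describes its method, so the following is only a generic sketch of the kind of federated co-tuning the title suggests: clients fine-tune small language models (SLMs) locally, the server averages them (FedAvg-style), and a mutual knowledge-distillation pass on public data lets the server LLM and the aggregated SLM improve each other. Every name here (co_tuning_round, kd_loss, fedavg) and the overall loop are hypothetical illustrations, not the paper's algorithm.

import copy
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL-divergence distillation loss between softened output distributions."""
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

def fedavg(state_dicts):
    """Uniform parameter averaging across client models (FedAvg)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def co_tuning_round(server_llm, client_slms, client_loaders, public_loader, lr=1e-4):
    """One hypothetical round of LLM/SLM co-tuning.

    Assumes every model maps a batch x to logits and every loader yields
    (x, y) pairs; none of this is taken from the paper itself.
    """
    # 1) Each client fine-tunes its small model on private data.
    for slm, loader in zip(client_slms, client_loaders):
        opt = torch.optim.AdamW(slm.parameters(), lr=lr)
        for x, y in loader:
            loss = F.cross_entropy(slm(x), y)
            opt.zero_grad(); loss.backward(); opt.step()

    # 2) The server aggregates client SLMs into a single proxy model.
    proxy = copy.deepcopy(client_slms[0])
    proxy.load_state_dict(fedavg([m.state_dict() for m in client_slms]))

    # 3) Mutual distillation on public data: each model learns from the
    #    other's (detached) predictions, so enhancement runs both ways.
    opt_llm = torch.optim.AdamW(server_llm.parameters(), lr=lr)
    opt_slm = torch.optim.AdamW(proxy.parameters(), lr=lr)
    for x, _ in public_loader:
        llm_logits, slm_logits = server_llm(x), proxy(x)
        loss_llm = kd_loss(llm_logits, slm_logits.detach())  # SLMs teach the LLM
        loss_slm = kd_loss(slm_logits, llm_logits.detach())  # LLM teaches the SLMs
        opt_llm.zero_grad(); loss_llm.backward(); opt_llm.step()
        opt_slm.zero_grad(); loss_slm.backward(); opt_slm.step()

    # 4) The updated proxy weights are broadcast back to every client.
    for slm in client_slms:
        slm.load_state_dict(proxy.state_dict())

A practical system would likely exchange parameter-efficient adapters rather than full state dicts to cut communication cost, but the excerpt above cuts off before any such details.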
Originally published on arXiv.