From Fine-Tuning to Prompting: A Review of Adaptation Strategies for Large Language Models

Publish Year: 1404 (Solar Hijri)
Document Type: Conference paper
Language: English

This paper is 9 pages long and is available for download in PDF format.

National Scientific Document ID:

INDEXCONF08_049

Indexing Date: 20 Bahman 1404 (Solar Hijri)

Abstract:

Large language models (LLMs) have transformed artificial intelligence, exhibiting emergent capabilities such as in-context learning. Their scale, however, sometimes reaching hundreds of billions of parameters, makes adapting them to specific tasks difficult: updating every parameter is impractical in both compute and storage. This motivates more efficient adaptation strategies. This article surveys the main families of methods: fine-tuning on labeled data or instruction-following examples to shape model behavior; prompting, which steers the model without modifying its weights; and parameter-efficient fine-tuning (PEFT), which updates only a small subset of parameters to save resources. We compare these approaches in terms of memory footprint, efficiency, and failure modes such as catastrophic forgetting and shortcut learning. We also examine newer directions, including RetICL, GOP, and the use of PEFT in distributed settings such as Fed LLM and PECFT. Finally, we highlight key open challenges and point toward future work, particularly the automated construction of PEFT architectures and the creation of stable, verifiable reasoning paths.
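To make the parameter-efficiency argument concrete, the sketch below illustrates one common PEFT technique, a LoRA-style low-rank update, in plain Python. The paper surveys PEFT broadly and does not prescribe this particular method; the matrix sizes and helper names here are illustrative assumptions. The frozen pretrained weight `W` is augmented with the product of two small trainable matrices `A` and `B`, so only a small fraction of parameters is ever updated.

```python
# LoRA-style low-rank adaptation sketch (illustrative, not from the paper):
# effective weight = W + A @ B, where W (d x d) stays frozen and only
# A (d x r) and B (r x d), with r << d, are trained.

def matmul(X, Y):
    """Plain-Python matrix multiply for small toy matrices."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def lora_weight(W, A, B):
    """Return the effective weight W + A @ B without touching W."""
    delta = matmul(A, B)
    return [[W[i][j] + delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, r = 64, 2                        # hidden size and low rank (toy values)
W = [[0.0] * d for _ in range(d)]   # frozen pretrained weight
A = [[0.1] * r for _ in range(d)]   # trainable down-projection
B = [[0.1] * d for _ in range(r)]   # trainable up-projection

W_eff = lora_weight(W, A, B)        # weight actually used at inference

full_params = d * d                 # parameters if we tuned W directly
peft_params = d * r + r * d         # parameters actually trained
print(f"trainable fraction: {peft_params / full_params:.2%}")  # 6.25%
```

With `d = 64` and `r = 2`, the adapter trains 256 parameters instead of 4,096, and the saving grows with `d`; this is the memory/efficiency trade-off the comparison in the paper refers to.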

Authors

Armin Janatisefat

Bachelor of Science Student in Computer Engineering, Islamic Azad University, West Tehran Branch, Tehran

Armin Tahamtan

Professor at Islamic Azad University, Tehran