Applying Domain Layer Normalization on Long-Short Term Memory network for adaptive Classification

Publication Year: 1402 (Solar Hijri)
Document type: Conference paper
Language: English
Views: 41

The full text of this paper has not been published; only the abstract (or extended abstract) is available in the database.
Note: Papers shorter than 5 pages are generally not treated as full-text papers in the CIVILICA database, and registered users can download their files without a credit deduction.



National scientific document ID: AISOFT01_018

Indexing date: 28 Bahman 1402

Abstract:

Recurrent-based classification models often face challenges in achieving high performance when provided with a limited amount of data. In such scenarios, if we have data for different related tasks, multi-task learning can be a useful approach for leveraging their commonalities and differences to enhance overall performance. Many existing works in multi-task classification apply complex architectures for sharing related information and handling differences. In this study, we introduce an innovative approach to address this issue by incorporating layer normalization into the model's design, enabling effective control of information flow. We refer to our model as the Domain Layer Norm Classifier (DLNC). Additionally, we define task-specific embeddings to further characterize each task and enhance performance. Despite its simplicity and lightweight design, our model demonstrates robust generalization capabilities, even when trained on limited data. To validate our approach, we conduct a fine-tuning experiment on a new dataset, highlighting the adaptability and effectiveness of our model.
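The abstract describes layer normalization whose parameters are conditioned on the task (domain), so that a shared recurrent backbone can be steered per task via cheap, task-specific scale and shift parameters. Since the full text is not available, the sketch below is only a plausible reading of that idea, not the paper's actual implementation: the function name `domain_layer_norm` and the per-task parameter table are hypothetical, and the paper's exact parameterization of the task embeddings is unknown.

```python
import numpy as np

def domain_layer_norm(h, task_gain, task_bias, eps=1e-5):
    """Layer normalization with per-task scale and shift.

    h: (batch, hidden) activations, e.g. an LSTM hidden state.
    task_gain, task_bias: (hidden,) vectors looked up for the current
    task (hypothetical parameterization; the abstract only says the
    normalization is used to control information flow across tasks).
    """
    mean = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    normed = (h - mean) / np.sqrt(var + eps)  # standard layer norm
    return task_gain * normed + task_bias     # task-conditioned affine

# Hypothetical per-task parameter table: one (gain, bias) pair per task,
# playing the role of a task-specific embedding.
rng = np.random.default_rng(0)
hidden = 8
task_params = {t: (rng.normal(1.0, 0.1, size=hidden), np.zeros(hidden))
               for t in range(3)}

h = rng.normal(size=(4, hidden))   # batch of hidden states
gain, bias = task_params[1]        # select parameters for the current task
out = domain_layer_norm(h, gain, bias)
print(out.shape)                   # (4, 8)
```

Because only the small per-task gain/bias vectors differ between tasks, the rest of the network is fully shared, which matches the abstract's claim of a simple, lightweight design that still adapts per task.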

Authors

Mohammad Amin Ghasemi

Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran