A Transformer-based Approach for Persian Text Chunking
Publication Year: 1401 (Solar Hijri)
Type: Journal paper
Language: English
This paper is 12 pages long and is available for download in PDF format.
Document National Code: JR_JADM-10-3_007
Index date: 1 October 2022
Abstract
Over the last few years, text chunking has played a significant role in sequence labeling tasks. Although a wide variety of methods have been proposed for shallow parsing in English, most approaches proposed for text chunking in the Persian language rely on simple, traditional techniques. In this paper, we propose using state-of-the-art transformer-based contextualized models, namely BERT and XLM-RoBERTa, as the backbone of our models. A Conditional Random Field (CRF) layer, a combination of Bidirectional Long Short-Term Memory (BiLSTM) and CRF, and a simple dense layer are each placed on top of the transformer-based models to improve performance in predicting chunk labels. Moreover, we provide a new dataset for noun phrase chunking in Persian, consisting of annotated Persian news text. Our experiments reveal that XLM-RoBERTa achieves the best performance among all the architectures tried on the proposed dataset. The results also show that a single CRF layer yields better results than a dense layer and even the combination of BiLSTM and CRF.
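To illustrate the role of the CRF layer the abstract describes, the sketch below implements Viterbi decoding, the inference step a CRF performs to pick the best chunk-label sequence given per-token emission scores (such as a transformer's logits) and learned label-transition scores. All scores, labels, and names here are hypothetical toy values for illustration, not the paper's actual model or data.

```python
# Minimal sketch of CRF Viterbi decoding for BIO noun-phrase chunk labels.
# Emission and transition scores below are toy values, not from the paper.

def viterbi_decode(emissions, transitions, labels):
    """Return the highest-scoring label sequence.

    emissions:   list of {label: score} dicts, one per token (e.g. the
                 per-token scores produced by a transformer encoder).
    transitions: {(prev_label, label): score} as a CRF would learn.
    labels:      the tag inventory, e.g. BIO tags for noun-phrase chunks.
    """
    # scores[l] = best score of any path ending in label l at this token
    scores = {l: emissions[0][l] for l in labels}
    backpointers = []
    for emit in emissions[1:]:
        new_scores, pointers = {}, {}
        for l in labels:
            # Best previous label to transition from, plus this emission
            prev, s = max(
                ((p, scores[p] + transitions.get((p, l), 0.0)) for p in labels),
                key=lambda x: x[1],
            )
            new_scores[l] = s + emit[l]
            pointers[l] = prev
        scores = new_scores
        backpointers.append(pointers)
    # Backtrack from the best final label
    best = max(scores, key=scores.get)
    path = [best]
    for pointers in reversed(backpointers):
        path.append(pointers[path[-1]])
    return list(reversed(path))

labels = ["B-NP", "I-NP", "O"]
# Toy 3-token sentence. The strong penalty on ("O", "I-NP") encodes the
# BIO constraint that a chunk-continuation tag cannot follow "O" --
# exactly the kind of label dependency a CRF layer captures.
emissions = [
    {"B-NP": 2.0, "I-NP": 0.1, "O": 0.5},
    {"B-NP": 0.2, "I-NP": 1.5, "O": 1.4},
    {"B-NP": 0.1, "I-NP": 0.2, "O": 2.0},
]
transitions = {("O", "I-NP"): -5.0, ("B-NP", "I-NP"): 1.0, ("I-NP", "I-NP"): 1.0}
best_path = viterbi_decode(emissions, transitions, labels)
```

With these toy scores the decoder returns `["B-NP", "I-NP", "O"]`: the transition bonus pulls the ambiguous second token into the noun-phrase chunk, which a per-token dense layer alone could not do.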
Authors
P. Kavehzadeh
Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran.
M. M. Abdollah Pour
Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran.
S. Momtazi
Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran.