Computational Costs, Inherent Biases, and Security Vulnerabilities in Contemporary Language Models: A Critical Analysis

Publication Year: 1404
Document Type: Conference Paper
Language: English

This paper is 10 pages long and is available for download in PDF format.


National Scientific Document ID: ICPCONF11_140

Indexing Date: 1 Azar 1404

Abstract:

The emergence of transformer-based language models has fundamentally transformed the landscape of natural language processing applications. While models such as GPT and BERT have delivered remarkable breakthroughs in text generation, machine translation, and automated summarization, their widespread adoption has simultaneously exposed critical limitations that demand urgent attention. This study examines three interconnected challenges that currently plague large-scale language models: the substantial computational resources required for their operation, the persistent manifestation of social biases in their outputs, and the growing spectrum of security threats they face. Through detailed analysis of recent findings from eight pivotal studies, we explore how these challenges affect the practical deployment of language technologies and their broader societal implications. Our investigation reveals that addressing these issues requires coordinated efforts across multiple dimensions, from technical innovations in model architecture to comprehensive policy frameworks governing AI development.

Keywords:

Natural language processing (NLP), Model architecture, Large-scale language models, GPT

Authors

Mohammad Mahdi Behnam Mehr

Department of Computer Engineering, Qo.C., Islamic Azad University, Qom, Iran

Ahmad Sharif

Department of Computer Engineering, Qo.C., Islamic Azad University, Qom, Iran