Defending Against Adversarial Attacks in Artificial Intelligence Technologies

Publish Year: 1404
Document type: Journal article
Language: English

This paper is 9 pages long and is available for download in PDF format.

National scientific document ID: JR_ITRC-17-2_002

Indexing date: 19 Mordad 1404

Abstract:

The rapid adoption of artificial intelligence (AI) technologies across diverse sectors has exposed vulnerabilities, particularly to adversarial attacks designed to deceive AI models by manipulating input data. This paper comprehensively reviews adversarial attacks, categorising them into training-phase and testing-phase types, with testing-phase attacks further divided into white-box and black-box categories. We explore defence mechanisms such as data modification, model enhancement, and auxiliary tools, focusing on the critical need for robust AI security in sectors such as healthcare and autonomous systems. Additionally, the paper highlights the role of AI in cybersecurity, offering a taxonomy for AI applications in threat detection, vulnerability assessment, and incident response. By analysing current defence strategies and outlining potential research directions, this paper aims to enhance the resilience of AI systems against adversarial threats, thereby strengthening AI's deployment in sensitive applications.
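To make the notion of "manipulating input data" concrete, the following is a minimal, hypothetical sketch of a white-box testing-phase attack (the Fast Gradient Sign Method) against a toy logistic-regression model. The model weights, input values, and perturbation budget below are illustrative assumptions, not taken from the paper; real attacks target deep networks, but the mechanism, perturbing the input along the sign of the loss gradient, is the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y_true, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    Perturbs the input in the direction that increases the loss:
        x_adv = x + eps * sign(d loss / d x)
    For binary cross-entropy with a linear model, the gradient of the
    loss with respect to the input is (p - y_true) * w.
    """
    p = sigmoid(w @ x + b)           # model's probability of class 1
    grad = (p - y_true) * w          # input gradient of the loss
    return x + eps * np.sign(grad)   # L-infinity-bounded perturbation

# Hypothetical toy model and input (illustrative values only)
w = np.array([1.0, -1.0, 0.5])
b = 0.0
x = np.array([0.1, -0.1, 0.1])

clean_pred = int(sigmoid(w @ x + b) > 0.5)       # clean input: class 1
x_adv = fgsm_attack(x, w, b, y_true=1, eps=0.2)
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)     # perturbed input flips to class 0
```

Although each component of the input moves by at most 0.2, the prediction flips, which is exactly the kind of small, targeted manipulation the defence mechanisms surveyed in the paper (data modification, model enhancement, auxiliary tools) aim to withstand.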