Ethical Analysis of the Responsibility Gap in Artificial Intelligence

Publish Year: 1403
Document Type: Journal Article
Language: English

This paper is 10 pages long and is available in PDF format.



National scientific document ID:

JR_IJETH-6-4_001

Indexing Date: 8 Esfand 1403

Abstract:

Introduction: The concept of the “responsibility gap” in artificial intelligence (AI) first arose in philosophical discussions to capture the concern that learning and partially autonomous technologies may make it difficult or impossible to attribute moral blame to individuals for adverse events. This is because, in addition to designers, the environment and users also participate in the development process. This ambiguity and complexity sometimes make it seem that the output of these technologies lies beyond the control of any human individual and that no one can be held responsible for it; this situation is known as the “responsibility gap”. This article explains the problem of the responsibility gap in AI technologies and presents strategies for the responsible development of AI that prevent such a gap from arising as far as possible.

Material and Methods: The present article examined the responsibility gap in AI. To this end, related articles and books were reviewed.

Conclusion: There have been various responses to the problem of the responsibility gap. Some hold that society can hold the technology itself responsible for its outcomes; others disagree. On the latter view, only the human actors involved in developing these technologies can be held responsible, and they should be expected to use their freedom and awareness to steer technological development in a way that prevents undesirable and unethical events. In summary, the three principles of routing, tracking, and engaging public opinion and attending to public emotions in policymaking can serve as three effective strategies for the responsible development of AI technologies.

Authors

Eva Schur

Department of Artificial Intelligence and Cybersecurity, Faculty of Technical Sciences, University of Klagenfurt, Austria

Anna Brouns

Department of Artificial Intelligence and Cybersecurity, Faculty of Technical Sciences, University of Klagenfurt, Austria

Peter Lee

Department of Artificial Intelligence and Cybersecurity, Faculty of Technical Sciences, University of Klagenfurt, Austria
