Comparing brain-inspired artificial networks against human object recognition

Publication year: 1398 (Iranian calendar)
Document type: Conference paper
Language: English
Views: 583

The full text of this paper has not been provided and is not available.

National scientific document ID: NSCMED08_518

Indexing date: 15 Dey 1398

Abstract:

Background and Aim: Simulating human visual perception is among the most important goals of artificial intelligence. The artificial neural network, a computational model of biological neural networks, is assumed to be heavily inspired by human visual representation. Understanding cortical object representation should therefore help us improve deep neural network architectures. The mechanisms by which objects are represented in the visual ventral pathway are studied by analyzing the responses of neurons along the temporal ventral stream. The layers of a deep neural network are claimed to resemble this pathway; theories posit that the first and last layers of a DNN are analogous to retinal ganglion cells and the anterior inferotemporal cortex, respectively. Although these analogies hold in theory, in practice some of them fail.

Methods: The main focus of this study is to compare neuroscience-inspired deep neural network layers, taken as counterparts of the stages of cortical object representation, against human object recognition performance. To this end, a dataset comprising two groups of animate and inanimate line drawings is divided into three conditions: COMP, EDGE, and VERT. The COMP condition consists of intact line drawings of animate and inanimate objects, while the EDGE and VERT conditions are fragmented versions of the same images carrying only edge and vertex information, respectively. The stimuli were shown to sixteen 8-to-12-year-old children, who were asked to name each image in all three conditions. The same stimuli were also fed into different deep neural networks so that their accuracies could be compared with human performance (see the evaluation sketch after this abstract).

Results: Human performance is highest for COMP and lowest for EDGE. In the brain-inspired artificial networks, however, not only did COMP recognition accuracy fall short of human performance, but, quite unlike humans, the accuracies for the VERT and EDGE drawings did not follow the human pattern either. This result shows that different networks have distinct recognition preferences across their layers. Our observations suggest that DNNs may not function exactly as humans do, and that there is considerable room to improve them with respect to such parameters.

Conclusion: Although widely used, deep neural networks cannot yet mimic human visual perception precisely, since they fail at simple tasks that humans perform easily, such as recognizing objects from line drawings.
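The paper does not name the specific networks or release the stimuli, so the following is only a minimal sketch of the evaluation loop described in Methods, assuming PyTorch/torchvision, an ImageNet-pretrained ResNet-50 standing in for the brain-inspired networks, and hypothetical stimuli/COMP, stimuli/EDGE, and stimuli/VERT folders; a real replication would also need a mapping from the line-drawing categories to the model's output labels.

```python
# Illustrative sketch only (not the authors' code): per-condition top-1 accuracy
# of a pretrained CNN on line-drawing stimuli split into COMP, EDGE, and VERT.
import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-50 is an assumption; the paper does not specify which networks were used.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

@torch.no_grad()
def condition_accuracy(folder: str) -> float:
    """Top-1 accuracy on one stimulus condition.

    Assumes the ImageFolder class indices already match the model's output
    labels; in practice a category-to-label mapping would be required.
    """
    dataset = ImageFolder(folder, transform=preprocess)
    loader = DataLoader(dataset, batch_size=32)
    correct, total = 0, 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Hypothetical directory layout: stimuli/COMP, stimuli/EDGE, stimuli/VERT.
for condition in ["COMP", "EDGE", "VERT"]:
    acc = condition_accuracy(f"stimuli/{condition}")
    print(f"{condition}: {acc:.2%}")
```

Per-condition accuracies computed this way can then be placed alongside the children's naming accuracy for COMP, EDGE, and VERT to check whether the network follows the human pattern.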

Authors

Niloufar Shahdoust

School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran

Mohammad Reza A. Dehaqani

School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran

Babak Nadjar Araabi

School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran