Neuro-temporal signature of low-level features in human object vision: an MEG study

Publish Year: 1397 (2018)
Document type: Conference paper
Language: English

The full text of this paper has not been provided and is not available.

National scientific document ID: HBMCMED05_010

Indexing date: 1 Dey 1397 (December 22, 2018)

Abstract:

1. Background
Object recognition occurs within a fraction of a second in the human brain and relies on a complex neural architecture that processes information from low-level features up to high-level semantics. Despite this human feat, object recognition remains a challenge for machine vision. Here, we studied the neuro-temporal trace of low-level features in human vision. We extracted three low-level features (Gabor descriptors, Canny edges, and Hough descriptors) from the stimuli presented to participants during MEG recording. The time courses obtained from the Spearman correlation between each visual feature RDM and the MEG decoding-accuracy RDMs estimated with multivariate pattern analysis (MVPA) show that, although all three features yield sustained correlations, the Hough descriptor reaches a higher peak and explains the neural data better than the other two visual features.

2. Method
The MEG data for this study come from a previously published study (Cichy, Pantazis et al., 2014). During that MEG recording, 92 real-world images from six categories (human and non-human bodies and faces, natural and artificial images) were presented to the participants (N = 16) for 500 ms every 1.5-2 seconds.

Multivariate pattern analysis (MVPA): To decode neural information from the MEG data, we trained a linear support vector machine (SVM) classifier to discriminate each pair of stimuli from the MEG data at each time point. To reduce noise, we randomly permuted the trials, combined them into K = 4 groups of 10 trials, and averaged the trials within each group. We used a leave-one-out scheme for training and testing the SVM classifier. The classifier accuracy served as a measure of dissimilarity between each pair of stimuli and was used to populate a 92 x 92 representational dissimilarity matrix (RDM).

Visual features: We extracted three low-level features commonly used in image processing from the stimuli. We estimated edges and lines using the Canny edge detection and Hough transform MATLAB functions. Gabor descriptors were extracted following (Haghighat, Zonouz et al., 2015), in which a Gabor filter bank is built as a 5 x 8 cell array whose elements are Gabor filters represented by 39 x 39 matrices. Applying these filters to each stimulus yields a column vector containing the Gabor features of the image.

Statistical testing: All significant time points were identified with non-parametric permutation tests using a cluster-defining threshold of P < 0.05 and a corrected significance level of P < 0.05 (N = 16).

3. Results
We applied MVPA to the MEG data and calculated the low-level feature RDMs. Fig. 1 shows the time courses obtained from the Spearman correlation between the MEG RDM at each time point and the visual feature RDMs. Color-coded solid lines above the time courses mark the significant time points for each curve. As shown, among the three features, the Hough features carry the most neural information. Since the Hough transform operates on edge maps, it captures overall shape and more semantic and categorical content than the other two features.

4. Conclusions
Using MVPA, we captured and compared the neurodynamic signatures of three popular low-level features in human vision. Our results confirm that, because the Hough transform encodes more semantic information, it better explains the neural data.
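The decoding and representational-similarity steps in the Method section can be summarized as follows. This is a minimal Python sketch, not the authors' implementation: it assumes a (conditions x trials x sensors x time) MEG array, and the function names, toy dimensions, and random data are purely illustrative.

```python
"""
Illustrative sketch of pairwise MEG decoding and RSA: trials of each condition
are randomly grouped and averaged into K = 4 pseudo-trials, a linear SVM
discriminates each pair of stimuli with leave-one-pseudo-trial-out
cross-validation, decoding accuracies populate an RDM, and the RDM is compared
with a model (feature) RDM via Spearman correlation at each time point.
"""
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def pseudo_trials(trials, k=4):
    """Randomly permute one condition's trials and average them into k groups."""
    trials = trials[rng.permutation(len(trials))]
    return np.array([g.mean(axis=0) for g in np.array_split(trials, k)])

def decoding_rdm(meg, k=4):
    """meg: (n_conditions, n_trials, n_sensors) at one time point -> pairwise accuracy RDM."""
    n_cond = meg.shape[0]
    pseudo = np.array([pseudo_trials(meg[c], k) for c in range(n_cond)])  # (n_cond, k, n_sensors)
    rdm = np.zeros((n_cond, n_cond))
    for i, j in combinations(range(n_cond), 2):
        accs = []
        for test in range(k):                      # leave one pseudo-trial out
            train = [p for p in range(k) if p != test]
            X_tr = np.vstack([pseudo[i, train], pseudo[j, train]])
            y_tr = np.array([0] * (k - 1) + [1] * (k - 1))
            X_te = np.vstack([pseudo[i, [test]], pseudo[j, [test]]])
            clf = SVC(kernel="linear").fit(X_tr, y_tr)
            accs.append(clf.score(X_te, np.array([0, 1])))
        rdm[i, j] = rdm[j, i] = np.mean(accs)
    return rdm

def rsa_timecourse(meg_timeseries, model_rdm, k=4):
    """Spearman correlation between the decoding RDM and a model RDM at each time point."""
    model_vec = squareform(model_rdm, checks=False)   # off-diagonal entries as a vector
    rhos = []
    for t in range(meg_timeseries.shape[-1]):
        rdm_t = decoding_rdm(meg_timeseries[..., t], k)
        rho, _ = spearmanr(squareform(rdm_t, checks=False), model_vec)
        rhos.append(rho)
    return np.array(rhos)

# Toy example: 6 conditions, 40 trials, 306 sensors, 5 time points (the real data have 92 conditions).
meg = rng.standard_normal((6, 40, 306, 5))
model = rng.random((6, 6)); model = (model + model.T) / 2; np.fill_diagonal(model, 0)
print(rsa_timecourse(meg, model))
```

In the paper the same procedure is repeated per MEG time point over the full 92-condition set, producing one 92 x 92 RDM per time point before the feature-RDM correlations are computed.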
5. References
Cichy, R. M., et al. (2014). Resolving human object recognition in space and time. Nature Neuroscience, 17(3), 455.
Haghighat, M., et al. (2015). CloudID: Trustworthy cloud-based and cross-enterprise biometric identification. Expert Systems with Applications, 42(21), 7905-7916.
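For the visual descriptors described in the Method section (the paper used MATLAB's Canny and Hough functions and the Gabor filter bank of Haghighat et al., 2015), the sketch below shows one way to compute comparable features and their model RDMs in Python. The filter parameters, downsampling, and the 1 - Pearson-correlation distance are assumptions, and skimage sizes the Gabor kernels automatically rather than fixing them at 39 x 39 as in the paper.

```python
"""
Illustrative low-level feature extraction: a 5 x 8 Gabor filter bank,
Canny edge maps, and Hough-transform accumulators, each turned into a
per-image feature vector and then into a model RDM (1 - Pearson correlation).
"""
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line
from skimage.filters import gabor_kernel
from scipy.signal import fftconvolve

def gabor_features(img, n_scales=5, n_orient=8):
    """5 scales x 8 orientations; each filter response is downsampled and concatenated."""
    feats = []
    for s in range(n_scales):
        for o in range(n_orient):
            kern = gabor_kernel(frequency=0.25 / (2 ** s), theta=np.pi * o / n_orient)
            resp = np.abs(fftconvolve(img, kern, mode="same"))
            feats.append(resp[::4, ::4].ravel())      # coarse downsampling
    return np.concatenate(feats)

def canny_features(img):
    return canny(img, sigma=2.0).astype(float).ravel()

def hough_features(img):
    edges = canny(img, sigma=2.0)
    accumulator, _, _ = hough_line(edges)             # line votes in (angle, distance) space
    return accumulator.astype(float).ravel()

def feature_rdm(images, extractor):
    """RDM of 1 - Pearson correlation between the feature vectors of all image pairs."""
    vecs = np.array([extractor(im) for im in images])
    return 1.0 - np.corrcoef(vecs)

# Toy example: 8 random grayscale "images" standing in for the 92 stimuli.
imgs = np.random.default_rng(1).random((8, 64, 64))
for name, fn in [("gabor", gabor_features), ("canny", canny_features), ("hough", hough_features)]:
    print(name, feature_rdm(imgs, fn).shape)
```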

Authors

Elaheh Hatamimajoumerd

Department of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran

Alireza Talebpour

Department of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran

Yalda Mohsenzadeh

Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA