Multi-Task Feature Selection for Speech Emotion Recognition: Common Speaker-Independent Features Among Emotions

Publish Year: 1400 SH (2021)
Document type: Journal article
Language: English

This 15-page paper is available for download in PDF format.


National scientific document ID: JR_JADM-9-3_001

Indexing date: 18 Mehr 1400 (10 October 2021)

Abstract:

Feature selection is one of the most important steps in designing speech emotion recognition systems. Because there is uncertainty as to which speech feature is related to which emotion, many features must be taken into account, and identifying the most discriminative among them is therefore necessary. To select appropriate emotion-related speech features, the current paper adopts a multi-task approach: the study treats each speaker as a task and proposes a multi-task objective function for feature selection. As a result, the proposed method chooses a single set of speaker-independent features that are discriminative across all emotion classes. Consequently, multi-class classifiers can be applied directly, or multi-class classification can be performed simply with binary classifiers. The present work employs two well-known datasets, Berlin and Enterface, and applies the openSMILE toolkit to extract more than 6500 features. After the feature selection phase, the results show that the proposed method selects features that are common across different runs. The runtime of the proposed method is also the lowest among the compared methods. Finally, 7 classifiers are employed, and the best achieved performance when facing a new speaker is 73.76% on the Berlin dataset and 72.17% on the Enterface dataset. These experimental results show that the proposed method outperforms existing state-of-the-art methods.
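The speaker-as-task idea described in the abstract is commonly implemented with an ℓ2,1-norm regularized objective, which zeroes whole rows of the joint weight matrix so that the same features are kept or dropped for every speaker at once. The following is a minimal, hypothetical sketch of that mechanism using proximal gradient descent on a toy regression problem; the data, the squared loss, and all parameter values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def l21_prox(W, t):
    """Proximal operator of t * ||W||_{2,1}: row-wise soft-thresholding.

    Rows whose l2 norm falls below t are set exactly to zero, which is
    what makes the penalty act as a feature selector."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def multitask_select(Xs, ys, lam=20.0, lr=1e-4, iters=2000):
    """Pick features shared by all tasks (here: speakers).

    Minimizes sum_k ||X_k w_k - y_k||^2 + lam * ||W||_{2,1} by proximal
    gradient descent, where column k of W is task k's weight vector.
    The l2,1 penalty zeroes entire rows of W, i.e. discards a feature
    for every task simultaneously, yielding a common feature set."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        # Per-task least-squares gradients, stacked into a (d, T) matrix.
        G = np.stack([2.0 * X.T @ (X @ W[:, k] - y)
                      for k, (X, y) in enumerate(zip(Xs, ys))], axis=1)
        W = l21_prox(W - lr * G, lr * lam)
    # Features whose row survived the shrinkage are selected for all tasks.
    return np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)

# Toy demonstration: 3 "speakers", 20 features, only the first 4 informative.
rng = np.random.default_rng(0)
true_w = np.zeros(20)
true_w[:4] = [2.0, -1.5, 1.0, 0.5]
Xs = [rng.normal(size=(60, 20)) for _ in range(3)]
ys = [X @ true_w + 0.01 * rng.normal(size=60) for X in Xs]
selected = multitask_select(Xs, ys)
print(selected)
```

In this sketch the row-wise shrinkage plays the role of the joint sparsity constraint: a feature is either useful for every speaker or eliminated for every speaker, which is what yields a speaker-independent feature set.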

Authors

E. Kalhor

Faculty of Computer Engineering and IT, Sadjad University of Technology, Mashhad, Iran.

B. Bakhtiari

Faculty of Computer Engineering and IT, Sadjad University of Technology, Mashhad, Iran.
