Learning Concepts from a Sequence of Experiences by Reinforcement Learning Agents

Publish Year: 1385 (Solar Hijri)
Document type: Conference paper
Language: English

This paper has 8 pages and is available for download in PDF format.


National scientific document ID: ACCSI12_138

Indexing date: 23 Dey 1386

Abstract:

In this paper, we propose a novel approach whereby a reinforcement learning agent attempts to understand its environment via meaningful temporally extended concepts in an unsupervised way. Our approach is inspired by findings in neuroscience on the role of mirror neurons in action-based abstraction. Since in many cases the best decision cannot be made from instantaneous sensory data alone, in this study we seek a framework for learning temporally extended concepts from sequences of sensory-action data. To direct the agent to gather information useful for concept learning, a reinforcement learning mechanism that exploits the agent's experience is proposed. Experimental results demonstrate the capability of the proposed approach to retrieve meaningful concepts from the environment. The concepts, and the way they are defined, are designed so that they not only ease decision making but can also be utilized in other applications, as elaborated in the paper.
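The abstract's general setting can be illustrated with a toy sketch: an agent learns a task by tabular Q-learning while recording its sensory-action trajectories, and repeated (state, action) subsequences in that experience are then counted as candidate temporally extended regularities. This is a minimal illustration of the setting only, not the paper's algorithm; the corridor environment, hyperparameters, and the `frequent_subsequences` helper are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy corridor MDP (illustrative, not from the paper): states 0..4,
# goal at state 4; actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Move along the corridor; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning that also records sensory-action trajectories."""
    random.seed(seed)
    q = defaultdict(float)
    trajectories = []
    for _ in range(episodes):
        s, done, traj = 0, False, []
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max((0, 1), key=lambda x: q[(s, x)])
            s2, r, done = step(s, a)
            # Standard Q-learning update.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            traj.append((s, a))
            s = s2
        trajectories.append(traj)
    return q, trajectories

def frequent_subsequences(trajectories, length=2):
    """Count repeated (state, action) subsequences across episodes -- a crude
    stand-in for mining temporally extended regularities from experience."""
    counts = defaultdict(int)
    for traj in trajectories:
        for i in range(len(traj) - length + 1):
            counts[tuple(traj[i:i + length])] += 1
    return counts
```

After training, the greedy policy solves the corridor, and the most frequent subsequences correspond to the repeated rightward moves along the optimal path; a real concept-learning mechanism would of course need far more than raw frequency counts.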

Authors

Farzad Rastegar

Control and Intelligent Processing Center of Excellence, Electrical and Computer Eng. Department, University of Tehran, North Karegar, Tehran, Iran

Majid Nili Ahmadabadi

Computer Eng. Department, University of Tehran, North Karegar, Tehran, Iran; School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics, Niavaran, Tehran, Iran