Fuzzy Radial Basis Function Least Square Policy Iteration: A Novel Critic-Only Reinforcement Learning Framework

Publish Year: 1404 (Solar Hijri)
Document type: Journal article
Language: English
Views: 183

This paper is 22 pages long and is available for download in PDF format.


National scientific document ID:

JR_IJFS-22-2_004

Indexing date: 29 Ordibehesht 1404

Abstract:

In this paper, a new critic-only Reinforcement Learning algorithm for continuous-state-space control problems is proposed. Our approach, called Fuzzy-RBF Least Square Policy Iteration (FRLSPI), tunes the weight parameters of a fuzzy-RBF network (a hybrid model formed by combining a Takagi-Sugeno fuzzy rule inference system with an RBF network) online, and is obtained by combining Least Squares Policy Iteration (LSPI) with the fuzzy-RBF network as a function approximator. In FRLSPI, the basis functions defined by the fuzzy-RBF network resolve the challenge of choosing state-action basis functions in LSPI. We also provide positive theoretical results: an error bound between the optimal and the approximated Action Value Function (AVF) for FRLSPI. Our proposed method has desirable features, including supporting mathematical analysis, independence from learning-rate tuning, and comparatively good convergence properties. Simulation studies on the mountain-car control task and the acrobot problem demonstrate the applicability and performance of our learning framework. The overall results indicate that the proposed method can outperform previously known reinforcement learning algorithms.
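To make the LSPI backbone described above concrete, the following is a minimal sketch of least-squares policy iteration with Gaussian RBF state-action basis functions. This is not the authors' FRLSPI (the Takagi-Sugeno fuzzy layer is omitted); the feature construction, sample format, and hyperparameters here are illustrative assumptions. Each discrete action gets its own block of RBF features, LSTD-Q solves a linear system for the AVF weights, and policy iteration repeats until the weights stop changing.

```python
import numpy as np

def rbf_features(state, action, centers, sigma, n_actions):
    """Gaussian RBF activations over the state, placed in the block
    of the feature vector that corresponds to the chosen action."""
    act = np.exp(-np.sum((centers - state) ** 2, axis=1) / (2 * sigma ** 2))
    phi = np.zeros(len(centers) * n_actions)
    phi[action * len(centers):(action + 1) * len(centers)] = act
    return phi

def lstdq(samples, w, gamma, centers, sigma, n_actions):
    """One LSTD-Q policy-evaluation step: build A and b from
    (s, a, r, s') samples and solve A w' = b."""
    k = len(centers) * n_actions
    A = 1e-6 * np.eye(k)          # small ridge term keeps A invertible
    b = np.zeros(k)
    for (s, a, r, s2) in samples:
        phi = rbf_features(s, a, centers, sigma, n_actions)
        # greedy next action under the current weight vector
        q_next = [w @ rbf_features(s2, a2, centers, sigma, n_actions)
                  for a2 in range(n_actions)]
        phi2 = rbf_features(s2, int(np.argmax(q_next)),
                            centers, sigma, n_actions)
        A += np.outer(phi, phi - gamma * phi2)
        b += r * phi
    return np.linalg.solve(A, b)

def lspi(samples, gamma, centers, sigma, n_actions, n_iters=20, tol=1e-4):
    """Policy iteration: repeat LSTD-Q until the weights converge."""
    w = np.zeros(len(centers) * n_actions)
    for _ in range(n_iters):
        w_new = lstdq(samples, w, gamma, centers, sigma, n_actions)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

In FRLSPI the fixed Gaussian features above would be replaced by the outputs of the fuzzy-RBF network, which is what lets the method sidestep hand-picking state-action basis functions.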

Keywords:

Authors

Omid Mehrabi

Department of Electrical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

Ahmad Fakharian

No. 35, 12th Alley, Khiabani Street, Anshenasan Highway, Tehran

Mehdi Siahi

Department of Electrical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

Amin Ramezani

Department of Electrical and Computer Engineering, Tarbiat Modares University (TMU), Tehran, Iran
