An Efficient Autonomous Exploration for Unknown Environment Mapping via Deep Reinforcement Learning

Publish Year: 1404 (Solar Hijri; 2025)
Document Type: Conference Paper
Language: English

This paper is 6 pages long and is available for download in PDF format.

National Scientific Document ID: AEROSPACE23_239

Indexing Date: 28 Mehr 1404 (October 20, 2025)

Abstract:

This research explores the integration of deep reinforcement learning (DRL) and attention mechanisms in exploration planning, aiming to enhance autonomous exploration and mapping in unknown environments. Traditional exploration strategies often struggle with inefficiencies such as redundant revisits and suboptimal global paths. To overcome these challenges, a novel exploration sequence planner is proposed, which replaces the heuristic asymmetric traveling salesman problem (ATSP) solver in FAEP with a deep deterministic policy gradient (DDPG) method augmented with an attention mechanism. This approach enables the agent to dynamically learn efficient exploration sequences by considering the spatial features of frontiers and the agent's proximity to unexplored areas. Through simulation experiments, the proposed method demonstrates improvements in exploration efficiency, reducing exploration time and flight distance compared to state-of-the-art benchmarks, including FAEP and FUEL. The results underscore the potential of DRL and attention mechanisms in advancing autonomous robotic exploration, paving the way for more intelligent and adaptive systems in applications such as surveying, rescue operations, and 3D reconstruction.
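To give a rough sense of the attention component described above, the following is a minimal PyTorch sketch of an attention module that scores candidate frontiers given the agent state. It is an illustrative assumption only, not the authors' implementation: the feature layout, dimensions, and scoring head are hypothetical, and a full DDPG planner would additionally include a critic network, replay buffer, and target networks.

```python
# Illustrative sketch (assumed architecture, not the paper's code): an attention-based
# module that scores candidate frontiers from their spatial features, in the spirit of
# the DRL exploration-sequence planner described in the abstract.
import torch
import torch.nn as nn


class FrontierAttentionActor(nn.Module):
    """Scores candidate frontiers with scaled dot-product attention.

    Each frontier is described by a feature vector (e.g. position relative to the
    agent, estimated information gain); the agent state serves as the query.
    """

    def __init__(self, frontier_dim: int = 5, agent_dim: int = 6, embed_dim: int = 64):
        super().__init__()
        self.query = nn.Linear(agent_dim, embed_dim)    # embed agent state
        self.key = nn.Linear(frontier_dim, embed_dim)   # embed frontier features
        self.value = nn.Linear(frontier_dim, embed_dim)
        self.score = nn.Linear(embed_dim, 1)            # per-frontier score head
        self.scale = embed_dim ** 0.5

    def forward(self, agent_state: torch.Tensor, frontiers: torch.Tensor) -> torch.Tensor:
        # agent_state: (batch, agent_dim); frontiers: (batch, n_frontiers, frontier_dim)
        q = self.query(agent_state).unsqueeze(1)        # (batch, 1, embed_dim)
        k = self.key(frontiers)                         # (batch, n, embed_dim)
        v = self.value(frontiers)
        attn = torch.softmax((q @ k.transpose(1, 2)) / self.scale, dim=-1)  # (batch, 1, n)
        context = attn.transpose(1, 2) * v              # attention-weighted frontier embeddings
        return self.score(context).squeeze(-1)          # (batch, n) frontier scores


if __name__ == "__main__":
    actor = FrontierAttentionActor()
    agent_state = torch.randn(1, 6)         # e.g. position and velocity (assumed layout)
    frontiers = torch.randn(1, 8, 5)        # 8 candidate frontiers
    scores = actor(agent_state, frontiers)
    next_frontier = scores.argmax(dim=-1)   # highest-scoring frontier is visited first
    print(scores.shape, next_frontier.item())
```

In this sketch, ordering frontiers by their learned scores plays the role that the heuristic ATSP tour ordering plays in FAEP; how the actual reward, state encoding, and training loop are defined is specified in the paper itself.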

Authors

Seid Hossein Pourtakdoust

Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran

Hadi Zare

Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran

Amir Hossein Khodabakhsh

Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran