Quadrotor Low-Noise Path Planning Using Reinforcement Learning Optimization

Document Type: Research Article

Authors

Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran

Abstract

This paper applies Q-learning, a reinforcement learning (RL) technique, to a quadrotor to find a low-noise trajectory while avoiding obstacles. The proposed method introduces a novel Q-value function that yields a 2D surface preserving the features of the environment, so the path-finding problem in 3D space is reduced to a 2D one. Because the required data are drawn from this pre-calculated 2D surface, online path planning under unpredictable environmental changes is handled with markedly reduced computational complexity, addressing a significant challenge in this area. The Q-learning algorithm is developed by defining two cost functions: one to avoid obstacles and one to reduce the noise level perceived by observers. To compute the noise Sound Pressure Level (SPL), the perceived-noise model is derived from the Gutin equation. In addition, the OctoMap 3D mapping framework is used to map the obstacles. Compared with related works, noise observers are placed both vertically and horizontally, capturing the environment in greater detail. Moreover, the proposed algorithm yields globally optimal paths and avoids the local minima commonly produced by similar optimization approaches. Finally, the performance of the proposed methodology in path finding and noise reduction is demonstrated on a practical quadrotor example.
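To give a concrete sense of the planning idea, the following is a minimal sketch (not the paper's implementation) of tabular Q-learning on a 2D grid whose reward combines an obstacle-avoidance cost with a perceived-noise cost. The grid size, weights, observer positions, and the simplified distance-based SPL attenuation used here are illustrative assumptions; the paper's actual Q-value function, Gutin-based SPL model, and OctoMap-derived obstacle map are described in the full text.

```python
# Sketch only: tabular Q-learning on a 2D grid with a reward that trades off
# obstacle avoidance against perceived noise. The SPL penalty is a simplified
# free-field stand-in (SPL drops by 20*log10(r/r0) dB with distance), not the
# Gutin-based model used in the paper.
import numpy as np

GRID = (20, 20)                        # assumed 2D grid abstracted from the 3D map
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1     # assumed learning rate, discount, exploration

obstacles = {(5, 5), (5, 6), (12, 8)}  # hypothetical occupied cells (e.g. from an OctoMap slice)
observers = [(3, 15), (16, 4)]         # hypothetical observer positions
goal = (19, 19)
SOURCE_SPL = 90.0                      # assumed rotor SPL at a 1 m reference distance

def noise_penalty(cell):
    """Worst-case perceived SPL over all observers (simplified stand-in model)."""
    spl = max(SOURCE_SPL - 20.0 * np.log10(max(np.hypot(cell[0] - o[0], cell[1] - o[1]), 1.0))
              for o in observers)
    return 0.05 * spl                  # assumed weighting between the two cost terms

def reward(cell):
    if cell in obstacles:
        return -100.0                  # obstacle-avoidance cost term
    if cell == goal:
        return 100.0
    return -1.0 - noise_penalty(cell)  # step cost plus perceived-noise cost term

Q = np.zeros(GRID + (len(ACTIONS),))
rng = np.random.default_rng(0)

for episode in range(2000):
    s = (0, 0)
    for _ in range(400):
        # epsilon-greedy action selection
        a = int(rng.integers(len(ACTIONS))) if rng.random() < EPS else int(np.argmax(Q[s]))
        nxt = (min(max(s[0] + ACTIONS[a][0], 0), GRID[0] - 1),
               min(max(s[1] + ACTIONS[a][1], 0), GRID[1] - 1))
        r = reward(nxt)
        # standard Q-learning update
        Q[s][a] += ALPHA * (r + GAMMA * np.max(Q[nxt]) - Q[s][a])
        s = nxt
        if s == goal or s in obstacles:
            break
```

After training, the greedy policy over the learned Q-table traces a path that keeps a margin from both obstacles and observers; in the paper this role is played by the proposed Q-value surface combined with the Gutin-based SPL model.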



Volume 3, Issue 2 (2024), issue in progress
  • Received: 07 October 2024
  • Revised: 26 November 2024
  • Accepted: 16 December 2024