TY - BOOK
AU - Moin, Hassan
AU - Supervisor: Dr. Muhammad Jawad Khan
TI - Enhanced Drone Control Using Reinforcement Learning
U1 - 629.8
PY - 2022///
CY - Islamabad
PB - SMME-NUST
KW - MS Robotics and Intelligent Machine Engineering
N1 - Quadcopters have already proven their effectiveness in both civilian and military applications. Their control, however, is a difficult task due to their under-actuated, highly nonlinear, and coupled dynamics. Most quadcopter autopilot systems use cascaded control schemes, where the outer loop handles mission-level objectives in 3D Euclidean space and the inner loop is responsible for stability and control. Such systems are generally operated using PID controllers, which have demonstrated strong performance in scenarios such as obstacle avoidance, trajectory tracking, and path planning. However, tuning their gains for nonlinear systems using heuristic or rule-based methods is a tedious, time-consuming, and difficult task. Rapid advances in computational engineering, on the other hand, have paved the way for intelligent flight control systems, which have become an important area of study addressing the limitations of PID control, most recently through the application of reinforcement learning (RL). In this dissertation, an optimal gain auto-tuning strategy is implemented for the altitude, attitude, and position controllers of a 6-DoF nonlinear drone model using a deep actor-critic RL algorithm with continuous observation and action spaces. The state equations are derived using Lagrange's (energy-based) method, while the drone's aerodynamic coefficients are estimated numerically using blade element momentum theory. Furthermore, the asymptotic stability of the cascaded closed-loop system is studied using Lyapunov theory. Finally, the proposed strategy is validated through simulation, where the gains learned by the RL agents allow the quadcopter to track a given trajectory accurately. Moreover, these optimal gains satisfy the conditions obtained from the Lyapunov stability analysis, indicating that RL is a powerful tool for handling the uncertainties present in complex nonlinear systems.
UR - http://10.250.8.41:8080/xmlui/handle/123456789/29934
ER -