Autonomous aerobatic flight is a challenging problem: aerobatic maneuvers involve highly nonlinear dynamics, and the underlying aerodynamics are not always well understood. Skilled pilots overcome these challenges through years of flying experience, but that experience is very difficult to translate to an autonomous vehicle. The goal of this project is to emulate an air race in a laboratory setting, in the hope of expanding control and autonomy paradigms to compensate for unknown system dynamics.
The unique feature of this project is the vehicle's capability to learn the optimal course trajectory through reinforcement learning (RL). In RL, an agent interacts with its environment and learns the desired behavior from reward feedback. Since flying is a safety-critical operation, predicting a lower bound on performance is pivotal. In addition, Multi-Fidelity Reinforcement Learning (MFRL), developed by ACL, could serve as an ideal framework for learning before implementation.
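To illustrate the agent-environment feedback loop described above, here is a minimal sketch of tabular Q-learning on a hypothetical toy "race course" of discrete gates. The environment, state/action encoding, and all hyperparameters here are illustrative assumptions, not the project's actual vehicle, course, or MFRL algorithm.

```python
import random

# Hypothetical toy course: states 0..N_STATES-1 are gates along the track.
# Action 0 holds position, action 1 advances one gate; reaching the final
# gate yields reward 1. This stands in for the real flight environment.
N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Environment feedback: next state, reward, and episode-done flag."""
    nxt = min(state + action, N_STATES - 1)
    done = nxt == N_STATES - 1
    reward = 1.0 if done else 0.0
    return nxt, reward, done

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            # Q-learning update driven by the environment's reward feedback.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy extracted from the learned Q-values.
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES)]
```

After training, the greedy policy advances through every non-terminal gate, showing how desired behavior emerges purely from reward feedback rather than an explicit model of the dynamics.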