The Safe Autonomous Flight Termination (SAFETERM) system aims to enhance current Medium Altitude Long Endurance (MALE) and large tactical Remotely Piloted Aircraft System (RPAS) Flight Termination Systems and the overall Emergency Recovery (ER) concept. SAFETERM explores the use of state-of-the-art Artificial Intelligence/Machine Learning technologies to increase the level of safety in specific emergency situations leading to a flight termination. The concept is particularly beneficial for emergencies occurring in Loss of Command and Control (C2) Datalink mode. The system aims to provide tools for the RPA to determine Flight Termination Areas (FTAs) where the risk to third parties can be kept to a minimum. The following image illustrates the robustness tests the team performed on real footage.
The SAFETERM consortium has assessed various machine-learning architectures based on image recognition technology for the determination of safe Alternative Flight Termination Areas (AFTA). The resulting algorithm was initially trained with synthetic data obtained with a COTS Modelling and Simulation Software, adapted to the ATLAS flight area where the data collection flight campaigns are taking place. The test campaigns will also take place at these facilities. In later stages of the project, the Consortium has trained the system with combinations of synthetic and real data, in order to measure the impact of different data sources on training for Learning Assurance and AI Trustworthiness purposes. The following image shows the different dataset combinations used throughout the development of the project. The first box corresponds to the initial assessment of the system architecture, where performance, robustness and completeness were evaluated on synthetic data only. The second box focuses on checking that the synthetic data matches the features present in real-world images. Finally, the third box analyses the impact on the model's generalization capacity when it is trained with different proportions of real and synthetic data and evaluated on real-data test sets. This process makes it possible to determine the effort and the amount of data required to obtain a well-performing system at the lowest data cost.
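As a minimal sketch of how such mixed-proportion training experiments could be organised (the dataset names, proportions and helper function below are illustrative assumptions, not the Consortium's actual tooling):

```python
import random

def build_mixed_dataset(synthetic, real, real_fraction, size, seed=0):
    """Illustrative helper: sample a training set with a given
    proportion of real images, padding the rest with synthetic ones."""
    rng = random.Random(seed)
    n_real = int(size * real_fraction)
    n_syn = size - n_real
    mixed = rng.sample(real, n_real) + rng.sample(synthetic, n_syn)
    rng.shuffle(mixed)
    return mixed

# Hypothetical experiment grid: train the same architecture on
# increasing shares of real data, then evaluate on a real-only test set.
synthetic_images = [f"syn_{i:05d}.png" for i in range(10_000)]
real_images = [f"real_{i:05d}.png" for i in range(2_000)]

for real_fraction in (0.0, 0.1, 0.25, 0.5):
    train_set = build_mixed_dataset(synthetic_images, real_images,
                                    real_fraction, size=1_000)
    print(f"real fraction {real_fraction:.2f}: {len(train_set)} samples")
```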
The third box of the previous image is based on traditional training, which would be the main means of production for an initial version of a COTS SAFETERM system. The system would then be adapted to the deployment environment using transfer learning techniques, in order to reduce the data cost of adapting it to a new scenario. Robustness, representativeness and completeness are well defined and measured in the available reports. The Consortium has designed data collection Flight Campaigns, performed by the TARSIS-75 RPAS, whose optical sensors were the ones modelled for the simulation environment.
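A minimal sketch of such a transfer-learning adaptation, using PyTorch with an off-the-shelf backbone as a stand-in (the architecture, class count and learning rate are assumptions, not the project's actual model):

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in base model, assumed to have been pre-trained on the
# synthetic (or mixed) dataset; here an untrained ResNet-18 is used.
model = models.resnet18(weights=None)

# Freeze the feature-extraction backbone so that only a small amount
# of scenario-specific real data is needed to adapt the classifier.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the terrain classes of the new scenario
# (the class count is an illustrative assumption).
num_terrain_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_terrain_classes)

# Only the new head's parameters are handed to the optimiser.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```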
Flight Termination
RPAS are equipped with a set of sensors that provide the Flight Management System with both the flight variables and the environmental information needed to perform a safe flight, e.g. RPA position, airspeed, attitude, surrounding air traffic and EO/IR visual sensors. These sensors are inherent to an RPAS, independently of its Flight Termination System typology.
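As an illustrative way of bundling the sensor inputs listed above into a single structure (the field names and units below are assumptions, not an actual interface specification):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RpaSensorState:
    """Illustrative bundle of the sensor inputs named above; field
    names and units are assumptions, not the real interface spec."""
    latitude_deg: float
    longitude_deg: float
    altitude_m: float
    airspeed_mps: float
    roll_deg: float
    pitch_deg: float
    heading_deg: float
    traffic_contacts: List[dict] = field(default_factory=list)  # surrounding air traffic
    eo_frame: Optional[bytes] = None  # latest electro-optical frame
    ir_frame: Optional[bytes] = None  # latest infra-red frame
```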
Architecture
SAFETERM is a slave system of a state machine and is triggered when the RPA is not able to reach any predetermined FTA. The master state machine may be the Automation & Emergency Recovery system, the Flight Warning Control System or another suitable system on board the RPA. This master system must have access to the RPA flight plan information and must transfer No-Fly area information to SAFETERM, so that SAFETERM can consider it when assessing Alternative Flight Termination Areas (AFTA). Likewise, SAFETERM must report its own internal status to the master system. Another functionality is to communicate the RPA's intention to other airspace users and Air Traffic Controllers (ATC); to this end, SAFETERM also has interfaces with the on-board ATC radio and transponder. SAFETERM exchanges information with other RPA systems in order to obtain the data needed to fulfil the mission objectives. This includes the flight information and the visual data coming from the cameras, which allow the AI algorithms to locate potential Alternative Flight Termination Areas.
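A minimal sketch of this master/slave relationship (the status values and method names below are assumptions made for illustration, not the actual SAFETERM interface):

```python
from enum import Enum, auto

class SafetermStatus(Enum):
    STANDBY = auto()
    SEARCHING_AFTA = auto()
    AFTA_FOUND = auto()
    FAILED = auto()

class Safeterm:
    """Illustrative slave-side interface: the master state machine
    pushes No-Fly areas and triggers the AFTA search; SAFETERM
    reports its internal status back."""

    def __init__(self):
        self.status = SafetermStatus.STANDBY
        self.no_fly_areas = []

    def update_no_fly_areas(self, areas):
        # Received from the master system (e.g. Automation & Emergency
        # Recovery), to be excluded when assessing candidate AFTAs.
        self.no_fly_areas = areas

    def trigger(self, flight_plan):
        # Called by the master when no predetermined FTA is reachable.
        self.status = SafetermStatus.SEARCHING_AFTA

    def report_status(self):
        # Polled by the master state machine.
        return self.status
```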
In the latter stages of the project, after validating results in the simulation environment, the system has been implemented on an embeddable architecture, combining a GPU that hosts the video acquisition and the ML algorithm, completely separated from the rule-based algorithms for consolidation and autopilot communications. This is mainly done for safety reasons, which are also aligned with certification constraints.
SAFETERM is built as partitioned avionics software, with time and space separation, in order to isolate the different functions of the SW.
The GPU is on an embeddable board, which improves the execution time and the performance of the Machine Learning Module. The Machine Learning Classifier partition runs a class detection algorithm: every incoming crop from the video middleware is categorized with a set of class probabilities, which are later consolidated to produce a segmentation map of the whole image. The rule-based image processing partition is then responsible for locating the centroid of the acceptable areas and for establishing the landing spot coordinates. The I/O partition manages the data exchange from SAFETERM to the outer elements and also acquires the predictions from the ML Classifier, adding an extra layer of safety to the system, as it acts as a firewall in case the result of the ML module is corrupted. The video middleware processes the input video coming from the RPAS optical sensors and generates the data packages needed as input by the ML partition, as shown in the next figure:
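As a rough sketch of the consolidation step described above (the grid size, class indices and acceptance rule are assumptions for demonstration, not the project's actual parameters):

```python
import numpy as np

# Illustrative consolidation: per-crop class probabilities from the ML
# partition are assembled into a segmentation map, and the centroid of
# the acceptable cells gives a candidate landing spot.
GRID_H, GRID_W, NUM_CLASSES = 8, 8, 5
ACCEPTABLE_CLASSES = {0, 1}  # e.g. open field, dry ground (assumed)

rng = np.random.default_rng(42)
crop_probs = rng.dirichlet(np.ones(NUM_CLASSES), size=(GRID_H, GRID_W))

# Segmentation map: most likely class per crop.
seg_map = crop_probs.argmax(axis=-1)

# Mask of acceptable cells and their centroid in grid coordinates.
acceptable = np.isin(seg_map, list(ACCEPTABLE_CLASSES))
if acceptable.any():
    rows, cols = np.nonzero(acceptable)
    centroid = (rows.mean(), cols.mean())
    print(f"candidate landing spot (grid coords): {centroid}")
else:
    print("no acceptable area in this frame")
```

Note that a plain centroid can fall outside a non-convex acceptable region, so a real consolidation step would need additional checks before committing to a landing spot.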
Certification and Standardization
SAFETERM needs to be certified as a critical system on board an RPA. An assessment of the certification needs has been conducted within the scope of the project. This assessment followed a conventional aviation certification approach and identified potential areas in which the certification basis must be adapted to cope with AI/ML-based systems. The latest version of the Certification and Standardization report is available for download in the Documents section of this webpage.
Results
The following image shows the results obtained from the different tests and data combinations performed during the last development phase. The best-performing ML Classifiers for both IR and EO are the ones implemented in the embedded version of SAFETERM, on the GPU, and will be the ones tested in real-life deployment flight tests. The graphs show the benefit of small amounts of real data for generalization purposes with respect to a system trained on synthetic data only. They also show the low false positive rates achieved; this is the key metric, as it represents AFTAs detected on forbidden areas. Although the vast majority of those are due to undetected roads (usually thin agricultural paths between forests or crop lands), it is a metric to be considered. In addition, the Hit Rate shows the percentage of valid landing spots detected, which is quite high and thus means that the system will generally find suitable landing areas in nearly all situations.
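As an illustrative sketch of how these two metrics can be computed from boolean masks (the grid size, thresholds and values below are made up for demonstration, not the project's results):

```python
import numpy as np

def false_positive_rate(pred_afta, forbidden):
    """Fraction of forbidden cells wrongly proposed as AFTAs; the key
    safety metric discussed above. Inputs are boolean masks."""
    fp = np.logical_and(pred_afta, forbidden).sum()
    return fp / max(forbidden.sum(), 1)

def hit_rate(pred_afta, valid_spots):
    """Fraction of the valid landing spots the system actually detects."""
    hits = np.logical_and(pred_afta, valid_spots).sum()
    return hits / max(valid_spots.sum(), 1)

# Toy example on an 8x8 grid of cells (all values are made up).
rng = np.random.default_rng(0)
pred = rng.random((8, 8)) > 0.5
forbidden = rng.random((8, 8)) > 0.8
valid = rng.random((8, 8)) > 0.7
print(f"FPR: {false_positive_rate(pred, forbidden):.2f}, "
      f"hit rate: {hit_rate(pred, valid):.2f}")
```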
In addition, transfer learning between synthetic and real data has been evaluated. As can be seen, the increments with respect to the performance of the preliminary assessment (green bar of the first graph) are substantial even with very small datasets. Having a better initial base (a model trained with real and synthetic data combined) would mean outstanding performance for the system, at a low cost in data and adaptation time. In the next figure, two examples of electro-optical and infra-red images are depicted for the reader to understand the inner steps performed by the system and how the ML partition works.
Proof of Concept
The proof of concept developed during Phase 3 was successful. The system has proved its capacity to find AFTAs on real-world footage, and the new acceleration tools implemented for the latest stage of the project will be tested during the Test Flight Campaigns held at the ATLAS flight centre in June 2022. The system has also served as a use case for the developments of the SAE WG-114, and it has been developed based on their outcomes and the latest versions of the EASA AI Guidelines.