While it seeks to advance artificial intelligence (AI) applications in aviation, EASA wants to ensure that humans remain involved in and provide oversight for these developments, according to a just-released document. Issue 2 of EASA's artificial intelligence concept paper presents foundational concepts that “are crucial for the safe and trustworthy development and implementation of AI technologies in aviation,” the agency said.
In the paper, EASA refines guidance for level 1 AI applications and provides guidance for level 2 AI-based systems. Level 1 AI applications are “those enhancing human capabilities,” and the refined guidance deepens “the exploration of learning assurance, AI explainability, and ethics-based assessment,” EASA explained.
“Level 2 AI introduces the groundbreaking concept of human-AI teaming, setting the stage for AI systems that automatically take decisions under human oversight.”
EASA released the concept paper to help those applying for certification of safety- or environment-related applications that will use AI or machine-learning technologies in areas covered by the EASA Basic Regulation. The agency published its "Artificial Intelligence Roadmap 2.0" in May, a living document that is updated regularly as AI development continues “through discussions and exchanges of views, but also, practical work on AI development in which the agency is already engaged.”