SEO Title
FAA Publishes First Roadmap for Artificial Intelligence Safety Assurance
Subtitle
31-page document recommends AI safety assurance strategies
Subject Area
Teaser Text
The FAA has released the first iteration of its "Roadmap for Artificial Intelligence Safety Assurance," four years after EASA introduced its AI roadmap.
Content Body

The FAA has released the first iteration of its “Roadmap for Artificial Intelligence Safety Assurance,” a 31-page document outlining the U.S. air safety regulator’s approach to safely integrating novel AI technologies in aviation. In addition to making AI safe, the FAA also seeks to identify ways that AI can make the industry safer, according to the strategy document. 

To develop its AI roadmap, the FAA consulted industry officials and other regulatory agencies, including the European Union Aviation Safety Agency (EASA), which published its first AI roadmap in 2020. EASA released a revised and expanded AI Roadmap 2.0 in May 2023, and this year the agency published a concept paper with new guidelines for companies that intend to certify AI systems.

In its version of the AI roadmap, the FAA introduces a list of core principles that will guide its development of AI safety assurance methods. For example, it recommends that regulators leverage existing aviation safety requirements and take an incremental, safety-focused approach to implementing AI, starting with lower-risk applications such as pilot aids that reduce workload and crew sizes.

The document also spells out key actions needed to enable both the safe use of AI and the use of AI for safety enhancements. These include collaborating with industry and government agencies, educating and training the FAA’s workforce on AI technology, and conducting ongoing research to evaluate the effectiveness of its safety assurance methods.

One area where EASA and the FAA differ in their roadmaps is ethical considerations. The FAA’s document declares that “the treatment of the ethical use of AI is outside the scope of this roadmap,” whereas EASA wrote in its version that “the liability, ethical, social and societal dimension of AI should also be considered.” According to EASA, ethical guidelines are critical to ensuring AI trustworthiness and earning societal acceptance of AI in aviation and in general.

While the FAA’s roadmap does not directly offer any ethical guidance, the document refers to recent federal directives that address the matter, including Executive Order 14110 (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”), which President Joe Biden issued in October 2023.

“This roadmap is being developed within a broader, evolving national framework established for the safe, secure, and trustworthy development and use of AI, including in appropriate cases its adoption and regulation across the federal government,” the FAA document states.

EASA’s roadmap offers an anticipated timeline for various phases of AI adoption, starting with pilot assistance and human-AI teaming this decade and culminating in fully autonomous commercial airliners entering the market around 2050. The FAA’s roadmap, however, does not speculate about the pace of AI adoption or the timing of any AI-related milestones.

Both the FAA and EASA treat their respective roadmaps as “living documents” that the agencies plan to update periodically as AI technology advances.

Expert Opinion
False
Ads Enabled
True
Used in Print
False
Writer(s) - Credited
Newsletter Headline
FAA Issues First Artificial Intelligence Roadmap
Newsletter Body

The FAA has released the first iteration of its “Roadmap for Artificial Intelligence Safety Assurance,” a 31-page document outlining the U.S. air safety regulator’s approach to safely integrating novel AI technologies in aviation. In addition to making AI safe, the FAA also seeks to identify ways that AI can make the industry safer, according to the strategy document.

Solutions in Business Aviation
0
AIN Publication Date
----------------------------