Universal Avionics director of professional services Amanda Grizzard oversaw the October 9 launch of the company’s next Grand Challenge, allowing employees to form teams to solve big problems and develop products for the Tucson, Arizona-based avionics manufacturer. This time, the Grand Challenge asks employee teams to determine whether generative artificial intelligence (AI) like ChatGPT can lead to new products.
Grizzard participated in Universal’s first Grand Challenge in 2019 and gladly accepted the role of leading Grand Challenge 2.0. “The company loved it [last time],” she explained. “It was a lot of fun, even though [my team] didn’t get too far.”
The spur for the new challenge came from Universal CEO Dror Yahav's interest in the latest AI developments. “He wanted to know how we could use it,” Grizzard said. “[Employing] large-language models and seeing how easy it is for someone without skills to use generative AI to improve the company’s operational efficiency and whether it can be incorporated in new products.”
As it did with the first Grand Challenge, Universal has opened the second iteration to all employees. Those interested are forming teams and developing ideas for putting AI to work.
This event is different from the first challenge, which involved a hardware solution for improving flight management system (FMS) interfaces. In 2.0, the teams won’t have to develop hardware-based products but will have freer rein to explore AI-based solutions.
Under the current plan, teams will continue to form and develop ideas, then submit them for initial approval to gain access to the data their AI applications will need. On November 13, the proposal phase kicks off, and teams will have two weeks to finalize their submissions.
Judges will evaluate and select those that will move on to the next phase—developing applications—which begins on December 11. All applications need to be ready for submission by January 15, when judges will decide which teams advance. Finalists will demonstrate their products from February 6 to 8 and Universal will announce the winner on February 12.
The last Grand Challenge generated about a dozen proposals. This time, Yahav expects 10 to 15.
“For the first one, employees were really engaged,” Grizzard said, “and they came up with great ideas. This time, we’re going for scalable applications and long-term opportunities. And we could have multiple solutions.”
The proposals will have to consider data security as they will rely on internal company resources. To avoid connections to the outside cyber world, participants will be given access to a secure internal network with all the information they need, as well as an internal AI trained only with Universal's information.
Participants will be given access to a guide that answers all their questions, provides links to resources, and gives areas of focus to consider for their solutions. This could include using AI for developing training materials, technical publications, chatbot services, and repair and maintenance processes.
“We pulled together research papers from the FAA, EASA, and universities,” Grizzard said, “and put them in a central location so they can look around and see what are the pain points and how they can solve them using AI.”
A guideline suggestion is to develop an AI-based system that captures the knowledge of Universal’s 42 years of FMS development. The company still gets calls for support for older units, but the people who designed, built, and supported those FMSs are no longer around.
“One guideline is to be able to train a neural network language model-based system to remember everything we’ve done,” said Yahav. “We have material for a company that has a lot of legacy products, but they're somewhere [not easily accessible]. If we teach a system, we can present this knowledge.”
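Capturing decades of legacy FMS knowledge, as Yahav describes, amounts to making old support material searchable so that a language model (or a support engineer) can pull up the relevant passage on demand. Below is a minimal Python sketch of that indexing-and-retrieval step; the note texts, IDs, and keyword-overlap scoring are illustrative assumptions, not Universal's actual system.

```python
# A minimal sketch of keyword retrieval over legacy product support notes.
# Note texts and IDs below are invented for illustration only.
import re

LEGACY_NOTES = {
    "note-nav-db": "Loading a navigation database on older FMS units requires "
                   "a serial data loader and a compatible database cycle.",
    "note-display": "Dim display symptoms on legacy units are usually traced "
                    "to the backlight inverter, not the display card itself.",
    "note-gps": "GPS sensor dropouts on early units were addressed by a "
                "software update to the sensor interface module.",
}

def tokenize(text):
    """Lowercase word tokens as a set."""
    return set(re.findall(r"[a-z]+", text.lower()))

def search(query, notes, top_k=2):
    """Rank notes by keyword overlap with the query; return (id, score) pairs."""
    q = tokenize(query)
    scored = [(note_id, len(q & tokenize(body)))
              for note_id, body in notes.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

hits = search("how do I load a navigation database?", LEGACY_NOTES)
print(hits[0][0])  # best-matching note id
```

A production system would replace the keyword overlap with embedding-based semantic search, but the shape is the same: legacy documents go in once, and relevant passages come back out for any future question.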
Once the winning solution or multiple solutions are chosen, they will be implemented, but all with an eye toward cybersecurity and protecting intellectual property (IP) ownership. This could be an issue because generative AI systems capture information from a variety of sources, often without permission.
Universal's approach addresses, for example, whether a writer who answers a question in a training module owns the IP for that response and how that response can be used in an AI framework.
“One benefit of when we identify how we want to use that information and having a locally hosted AI server [is that] we can influence the training,” Grizzard said. “We start with a base model we believe in and we influence that model.
“And we always have a human oversight aspect, making sure it’s doing things correctly, it’s ethical, and checking those boxes. We are the influencer, and there aren’t peripheral inputs. I really believe it can change how we work for the better.”
“What people are doing with ChatGPT is amazing,” Yahav concluded, but there are cases where generative AI “starts to make up stories, all kinds of bizarre stuff.”
The solution is to take advantage of the technology but support the results with references from where the AI model was trained. It can provide a link to why it came up with the answer, and human users can dig further to validate the information.
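The approach described here, returning an answer together with references to the material it was drawn from, can be sketched in a few lines. In this hypothetical Python example, the knowledge base entries are invented and the "answer" step is a trivial stand-in for a real model; the point is the payload shape, where every answer carries the source IDs a human can follow to validate it.

```python
# A sketch of attaching source references to an AI answer so a human
# reviewer can trace where it came from. The knowledge-base entries and
# the trivial matching step are stand-ins, not a real model.
import re

KNOWLEDGE_BASE = {
    "svc-bulletin-12": "Replace the cooling fan filter every 500 flight hours.",
    "svc-bulletin-31": "Firmware 4.2 resolves the intermittent CDU reboot issue.",
}

def keywords(text):
    """Lowercase word tokens, skipping very short words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}

def answer_with_references(question, kb):
    """Return an answer plus the source IDs that support it."""
    q = keywords(question)
    supporting = [doc_id for doc_id, text in kb.items()
                  if q & keywords(text)]
    answer = " ".join(kb[d] for d in supporting) or "No supporting source found."
    return {"answer": answer, "sources": supporting}

result = answer_with_references("What fixes the CDU reboot issue?", KNOWLEDGE_BASE)
print(result["sources"])  # ['svc-bulletin-31']
```

Because the `sources` list travels with every answer, an empty list is itself a signal: the system found nothing to ground the response in, which is exactly the "made-up story" case a human reviewer should catch.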
At some point, as the users gain more trust in a well-designed AI system, they won’t need to conduct as much background checking. This is similar to how a quality-control system works in the non-AI world.
As Yahav explained, it’s like hiring a new quality-control inspector from a competing company. “Initially, you review their work carefully. The more you gain confidence, the less you check. In the end, the ultimate responsibility is on the manager. With AI, initially, we’ll double-check and after a while, check critical items, but the manager will always be responsible.”