October 17, 2025

Trustworthy AI for Aviation: a Human-Centric Roadmap

The roadmap defines incremental “levels” of AI capability, starting from simple automation support to human-AI collaboration and, eventually, limited autonomy. Crucially, EASA’s approach is human-centric: every AI system must enhance human decision-making, not replace it.

This direction sends a clear message: aviation can, and must, adopt AI responsibly, guided by the same principles that have kept our industry safe for decades. The next question is how this translates to real operations.

Operational augmentation, not wholesale replacement

One of the most important principles in EASA’s roadmap is that AI should augment, not automate, operational decision-making.

For example, AI tools can already support manual authoring and editing by suggesting more consistent phrasing, detecting outdated references, or summarizing regulatory updates, helping documentation teams work faster and with fewer errors. 

In maintenance and engineering, AI can assist in identifying patterns that predict component wear or failure earlier than traditional analytics. And in flight operations, AI can support trajectory optimization, safety reporting, or data classification, helping teams focus on decision-making rather than data crunching.

In all these cases, AI’s role is supportive: it streamlines human work without taking away the human judgment that underpins safety. EASA calls this Level 1 or 2 AI: systems that assist or collaborate with the human, never acting independently.

When implemented in this spirit, AI becomes a co-pilot for data and documentation: not a replacement for expertise, but an accelerator of it.

Workforce, culture & reskilling: the people side of AI adoption

AI adoption is not only about technology; it’s about people. Operators will succeed or fail based on how well they prepare their teams for this transition.

The first priority is to build AI-aware validation roles: people who understand both the aviation context and how AI tools work, so they can test, verify, and approve outputs.

Second, organizations must invest in data stewardship and governance skills. Teams need to understand where data comes from, how it’s used, and how to protect its integrity and privacy.

Third, culture matters. Safety culture has always depended on healthy skepticism, and the same applies to AI. Operators must foster an environment where questioning AI outputs is encouraged, not penalized.

Finally, AI adoption is a change management process. Rolling out AI without training and context risks confusion and mistrust. Pilots, engineers, and document controllers need to see AI as an enabler of safety, not as a threat to their professionalism.

When people are included, trained, and empowered, AI integration becomes sustainable.

Governance, assurance & ethics: building trust at scale

If aviation has a competitive advantage in adopting AI, it’s in its governance. Our industry already has decades of experience with traceability, certification, and design assurance, and these same principles must guide AI.

For operators introducing AI-based tools, here’s a practical governance checklist grounded in EASA’s trustworthiness framework:

  • Data governance and access control: Clearly define which data sources are approved and maintain version control.
  • Human-in-the-loop validation: Require human sign-off for every AI-generated suggestion or change.
  • Traceability and audit logs: Record prompt history, model version, and decision trails.
  • Vendor assurance and model provenance: Understand where your models come from and what data they were trained on.
  • Testing and validation: Benchmark AI outputs against domain-expert baselines; regularly stress-test for corner cases.
  • Continuous monitoring: Track performance drift and implement rollback procedures when quality declines.
  • Ethical transparency: Communicate how AI is used and ensure bias checks and privacy safeguards are in place.

EASA emphasizes that AI must be explainable, predictable, and traceable, qualities that align perfectly with aviation’s safety culture. Operators that embed these principles early will find that AI integration strengthens, rather than challenges, compliance.

Practical first steps for operators

Every organization can start small and build a foundation for trustworthy AI.

  • Begin with low-risk use cases such as document search, summarization, or authoring assistance.
  • Establish an AI governance board that includes safety, compliance, and IT.
  • Define roles and accountability for validation and oversight.
  • Build audit logs and transparency mechanisms from day one.
  • Engage early with EASA’s innovation hub and national authorities to stay aligned with evolving guidance.

Incremental adoption, guided by governance, is the surest path to success.

Building Trust at Web Manuals

Although EASA’s roadmap was introduced more than two years ago, it remains the global reference point for safe AI adoption due to its iterative nature and human-centric focus. AI is not a revolution that replaces people; it’s an evolution that redefines how we work together. 

As the agency continues to update its guidance annually, operators can rely on its principles to guide their AI journeys today and ensure future regulatory compliance tomorrow.

This new era in aviation tests our ability to keep human expertise at the center of digital transformation, demanding traceability and accountability at every step. At Web Manuals, we’ve taken this philosophy to heart with the Amelia suite, designed not to replace aviation professionals but to empower them.

  • Amelia Co-Author supports document editors by suggesting text, summarizing updates, and aligning phrasing with regulatory standards, all within a controlled, traceable workflow where the human remains the final authority.
  • Amelia Document Search takes that same AI foundation and applies it to information retrieval: helping pilots, compliance managers, and flight ops teams quickly find descriptive, contextual answers across their manuals, while maintaining strict access control and version traceability.

In both cases, security is non-negotiable. Amelia only searches through documents that belong to your organization and never uses information from the internet. All data remains private and encrypted within Web Manuals’ secure infrastructure. 

These safeguards ensure Amelia enhances operational safety while maintaining the highest standards of data protection and user trust.

Both tools demonstrate a principle that aviation has always understood: technology can enhance safety and efficiency when it serves human judgment, not the other way around.

A Human-Centric Future for Aviation AI

By applying frameworks like EASA’s roadmap and adopting tools grounded in transparency and trust, we can build AI systems that serve and empower teams.

Remember that EASA describes its roadmap as a “living document”: it is not static, and it continues to evolve alongside technological and operational advances. Its principles on trustworthiness, human oversight, transparency, and explainability are foundational to how EASA approaches certification and oversight of AI-based systems.

For aviation leaders, the message is clear: the plan is not about slowing innovation; it’s about embedding accountability and trust into every AI implementation. How is your organization approaching AI adoption in documentation?

By Paul Sandström, COO, Web Manuals
