
Explainability (XAI)

intermediate
The ability to understand and explain how an AI system reaches its decisions. The EU AI Act requires it for high-risk systems: users must be able to understand why an AI made a specific recommendation or decision. Common techniques include feature importance, attention visualization, and counterfactual explanations.
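One of the techniques named above, feature importance, can be illustrated with permutation importance: shuffle one feature's values and measure how much the model's error grows. A minimal sketch, using a hypothetical toy dataset and a stand-in model (all names and data here are illustrative, not from the glossary):

```python
import random

# Hypothetical toy data: two features; the target depends mostly on feature 0.
X = [[x0, x1] for x0 in range(10) for x1 in range(10)]
y = [3 * x0 + 0.2 * x1 for x0, x1 in X]

def model(row):
    # Stand-in for a trained model (here simply the true function).
    return 3 * row[0] + 0.2 * row[1]

def mse(rows, targets):
    # Mean squared error of the model on the given data.
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(targets)

def permutation_importance(rows, targets, feature, seed=0):
    # Shuffle one feature column and report the increase in error.
    rng = random.Random(seed)
    col = [r[feature] for r in rows]
    rng.shuffle(col)
    permuted = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(rows, col)]
    return mse(permuted, targets) - mse(rows, targets)

scores = [permutation_importance(X, y, f) for f in range(2)]
# Feature 0 should score far higher than feature 1, matching its larger weight.
```

A large score means the model relies heavily on that feature, which is the kind of "why this decision" evidence the explainability requirement points at.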
Related Terms
High-Risk AI · AI Governance