Bias (in AI)

beginner
Systematic errors in AI systems that lead to unfair outcomes for certain groups of people. Bias can come from training data (if the data underrepresents certain groups), from model design, or from how the system is deployed. The EU AI Act requires bias testing for high-risk systems.
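One common bias test is the disparate impact ratio: compare the rate of positive outcomes (e.g. loan approvals) between groups. The sketch below assumes binary outcomes and uses the illustrative "four-fifths" threshold from US hiring guidance; the EU AI Act does not prescribe this specific metric or threshold.

```python
# Minimal sketch of a disparate impact check, one common bias test.
# The 0.8 ("four-fifths") threshold and the example data are
# illustrative assumptions, not requirements from the EU AI Act.

def selection_rate(outcomes):
    """Share of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (<= 1.0)."""
    lo, hi = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lo / hi

# 1 = positive outcome, 0 = negative outcome
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
print("flag for review" if ratio < 0.8 else "within threshold")
```

A ratio well below 1.0 signals that one group receives positive outcomes far less often, which would prompt deeper investigation of the training data and model.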
Related Terms
Responsible AI · Training Data