
Transformer

Level: advanced
The neural network architecture behind modern large language models. Introduced by Google researchers in the 2017 paper "Attention Is All You Need", the transformer uses attention mechanisms to weigh the relationships between all tokens in a sequence in parallel, regardless of how far apart they are in the text. This breakthrough underpins GPT, BERT, Claude, and most other modern language models.
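The core attention idea can be sketched as scaled dot-product self-attention. This is a minimal NumPy sketch, not the full transformer architecture; the function and variable names are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key, so the distance between
    two positions in the sequence does not matter."""
    d_k = Q.shape[-1]
    # similarity of every query to every key, scaled for stability
    scores = Q @ K.T / np.sqrt(d_k)
    # softmax turns scores into attention weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # each output is a weighted mix of all value vectors
    return weights @ V

# toy example: 4 token positions, 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8): one context-aware vector per token
```

In a real transformer this runs with learned projection matrices for Q, K, and V, across multiple heads and layers, but the distance-independent mixing shown here is the mechanism the definition describes.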
Related Terms
Deep Learning
Large Language Model (LLM)