Companion view
The LLM Evolutionary Tree.
A living lineage of frontier and open-weight language models, updated as new families branch and converge.
A phylogenetic tree of language models, rooted at the Transformer paper that started it all (“Attention Is All You Need”, 2017). Recent years dominate the canvas. Rendered from a structured dataset of 200 models with cited sources, infrastructure signals, and explicit confidence levels. Companion to The AI Stack Weekly.
Branches (color)
- Foundational
- Encoder-only
- Encoder-decoder
- Decoder-only
- Mixture-of-Experts
- Multimodal
- Reasoning
Drag to pan. Scroll-wheel or pinch to zoom. Click any node for details. Hover to trace lineage back to the Transformer root. Solid lines are direct parents; dashed lines are weaker influences. Filled pills are open-source / open-weight models; outlined pills are closed. An amber dot marks a gated / invitation-only model; a dashed outline marks a low-confidence placement.
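For intuition, here is a minimal sketch of how one record in such a dataset could encode these signals. It is written in TypeScript, and every field name is an illustrative assumption, not the tree's actual schema.

```ts
// Illustrative sketch only: field names and types are assumptions,
// not the actual schema behind the tree.
type Branch =
  | "foundational"
  | "encoder_only"
  | "encoder_decoder"
  | "decoder_only"
  | "mixture_of_experts"
  | "multimodal"
  | "reasoning";

interface ModelNode {
  id: string;                  // unique model identifier
  branch: Branch;              // branch of the defining architectural shift
  parents: string[];           // solid edges: direct parents
  influences: string[];        // dashed edges: weaker influences
  openWeights: boolean;        // filled pill vs. outlined pill
  gated: boolean;              // amber dot: gated / invitation-only
  confidence: "high" | "low";  // dashed outline when "low"
  sources: string[];           // citations backing the placement
}
```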
Methodology
What you are looking at.
- Architectural branches. Seven primary branches: foundational, encoder_only, encoder_decoder, decoder_only, mixture_of_experts, multimodal, reasoning. A model sits on the branch where its defining architectural shift lives, not with its vendor family.
- Lineage. Solid lines are direct parents. Dashed lines are weaker influences. Hover any node to see its ancestor chain back to a foundational ancestor; a sketch of that walk follows this list.
- Confidence. Solid border = high confidence. Dashed border with footnote = low confidence. Amber dot = gated / invitation-only model.
- Cadence. Updated weekly. Every model has a citation; every numeric signal has a confidence level; every placement has a rule.
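The hover trace described above amounts to walking solid parent edges from a node back to a foundational ancestor. A minimal sketch, assuming the illustrative ModelNode shape from the earlier block, acyclic lineage data, and that the first listed parent is the primary one:

```ts
// Sketch of the hover trace: follow direct-parent (solid) edges
// until a foundational node (or a node with no parents) is reached.
// Assumes the illustrative ModelNode shape sketched earlier and
// acyclic lineage data; the real traversal rules may differ.
function ancestorChain(id: string, nodes: Map<string, ModelNode>): string[] {
  const chain: string[] = [];
  let current = nodes.get(id);
  while (current) {
    chain.push(current.id);
    if (current.branch === "foundational" || current.parents.length === 0) {
      break;
    }
    current = nodes.get(current.parents[0]); // primary direct parent
  }
  return chain;
}
```

Dashed influence edges play no part in this walk, matching the legend: only solid direct-parent lines define lineage.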
Companions.
- The Model Pulse — what moved on the software side
- The AI Stack Weekly — what moved this week
- The thesis the weekly defends
Operate. Publish. Teach.