Education in the age of AI should teach everything.
It's easy to fall into the trap of thinking AI handles breadth, so school should narrow. Drop the languages, skip the history, double down on STEM, add a "prompt engineering" elective. That gets it backwards.
The person who gets the most out of AI is the one with the widest education, not the deepest.
Here's why:
+ AI is good at anything, not everything
+ The connection space is too large to brute force
+ Humans intuit strong questions
+ Specialization is becoming management
+ Humans intuit which questions matter
+ Diverse thinking beats "best" thinking
AI is good at anything, not everything
Pick any single domain - tax law, organic synthesis, sonnet writing, Kubernetes - and a frontier model will perform somewhere between a competent practitioner and an expert. Pick the intersection of two unrelated domains and the quality drops sharply. "Write me a sonnet about reconciling EBITDA add-backs in a SaaS LBO model" is technically possible but noticeably worse than either skill alone.
The breadth is real. The depth at any arbitrary intersection is not.
The connection space is too large to brute force
The relevant scale is not what one human carries. It is what the AI was trained on. English Wikipedia: 7 million articles. Pairs of articles: roughly 24.5 trillion. Triples: roughly 57 quintillion. Frontier training sets are much larger than 7 million documents. No model precomputes those connections, and none can retrieve the relevant one without a prompt that names it.
[insight(a, b) for a, b in itertools.combinations(all_human_knowledge, 2)] is not a strategy. The cross-domain insight has to be requested. Someone has to suspect it might exist.
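The arithmetic is easy to check exactly (7 million is a round figure for English Wikipedia's article count; the real total drifts over time):

```python
from math import comb

ARTICLES = 7_000_000  # round figure for English Wikipedia's article count

pairs = comb(ARTICLES, 2)    # unordered pairs of articles
triples = comb(ARTICLES, 3)  # unordered triples

print(f"pairs:   {pairs:.2e}")    # ~2.45e13, about 24.5 trillion
print(f"triples: {triples:.2e}")  # ~5.72e19, about 57 quintillion
```

Even enumerating the pairs at a billion per second would take most of a year; the triples would take longer than the age of the universe.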
Humans intuit strong questions
Ask a vague question, get a vague answer. Ask a precise question with sharp vocabulary and explicit constraints, get something useful. The bottleneck on AI output is usually the input.
A student trained only in CS can ask CS questions well. A student also trained in history, ethics, music, and biology can ask questions that touch all of those. That is where AI output looks superhuman, because the human did the hard part.
Specialization is becoming management
The job of a senior IC in 2026 looks a lot like the job of a manager in 2016: set context, decompose the problem, evaluate the output, push back when it drifts. Computer scientists are rediscovering management theory. Huh.
Microsoft's December 2025 New Future of Work Report puts it directly: workers are "shifting from merely doing work to guiding, critiquing, and improving the work of AI." That is a job description for a manager.
Managing five AI agents is not a different skill from managing five junior engineers. It is the same skill applied at higher throughput. Management has always rewarded breadth.
Humans intuit which questions matter
Formulating a question is one skill. Choosing which question to ask at all is a different one, and harder. It is the difference between a research assistant and a principal investigator.
AI does shift the calculus. When pursuing a question takes minutes instead of months, more speculative chases become affordable, and the bar for "worth running" drops. The intuition for which speculations to run still has to come from somewhere.
That somewhere is pattern recognition across domains: something a generalist accumulates over years, and a specialist often does not.
Diverse thinking beats "best" thinking
Hong and Page (PNAS, 2004) proved that under reasonable conditions, a group of cognitively diverse problem-solvers outperforms a group of the highest-ability problem-solvers on hard problems. The intuition: diverse heuristics cover more of the search space than redundant strong ones do.
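A toy sketch in the spirit of the Hong-Page setup (my parameters and simplifications, not theirs): agents are hill-climbing heuristics on a random ring, the "best" team is the nine strongest solo performers, the "diverse" team is nine drawn at random. Across many seeds the diverse team often edges out the redundant top-ability team on this kind of landscape, though any single run can go either way.

```python
import itertools
import random

N = 200                 # ring size
random.seed(7)          # arbitrary seed; results vary by seed
landscape = [random.random() for _ in range(N)]

def climb(pos, heuristic):
    """Hill-climb: take the first step in `heuristic` that improves, repeat."""
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
                break
    return pos

def team_climb(pos, team):
    """Relay: agents keep taking turns until no member can improve."""
    improved = True
    while improved:
        improved = False
        for h in team:
            new = climb(pos, h)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return pos

# A heuristic is an ordered triple of distinct step sizes from 1..12.
heuristics = list(itertools.permutations(range(1, 13), 3))
starts = range(0, N, 5)  # 40 evenly spaced starting points

def avg(team):
    return sum(landscape[team_climb(p, team)] for p in starts) / len(starts)

# Rank individuals by average solo performance.
solo = sorted(heuristics, reverse=True,
              key=lambda h: sum(landscape[climb(p, h)] for p in starts))

best_team = solo[:9]                          # nine highest-ability agents
diverse_team = random.sample(heuristics, 9)   # nine random (diverse) agents

print(f"best-team avg:    {avg(best_team):.3f}")
print(f"diverse-team avg: {avg(diverse_team):.3f}")
```

The mechanism is visible in the code: a redundant team gets stuck at the same local optima its members share, while differing step sizes unstick each other.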
ML system designers already commit to this. Amazon's homepage runs multiple recommenders side by side - item-to-item collaborative filtering, browsing history, frequently-bought-together, and more - because no single A/B-test-winning algorithm beats the ensemble. The people who could engineer one optimal model build many imperfect ones instead.
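The ensemble move in miniature (the recommenders, item names, and scores here are all invented for illustration; real systems score from behavioral data):

```python
from collections import defaultdict

# Three imperfect "recommenders", each returning {item: score}. All hypothetical.
def co_purchase(user):       return {"usb_hub": 0.9, "laptop_stand": 0.6}
def browsing_history(user):  return {"laptop_stand": 0.8, "monitor": 0.7}
def trending(user):          return {"monitor": 0.5, "usb_hub": 0.3}

def ensemble(user, recommenders):
    """Blend imperfect scorers by summing their scores per item."""
    totals = defaultdict(float)
    for rec in recommenders:
        for item, score in rec(user).items():
            totals[item] += score
    return sorted(totals, key=totals.get, reverse=True)

print(ensemble("u1", [co_purchase, browsing_history, trending]))
# → ['laptop_stand', 'usb_hub', 'monitor']
```

No single scorer ranked laptop_stand first; the blend did. That is the whole argument for keeping several imperfect perspectives in the room.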
A human plus several AIs is the same play. The human contributes the heuristic the AIs do not have. That contribution scales with breadth and collapses without it.
Educate the whole person. In the age of AI, the liberal arts are not a luxury - they are the training ground for a role that endures.
We homeschooled our kids on exactly the bet that breadth would compound and narrowness would not. So far, it has.
