Ilya Sutskever – We're moving from the age of scaling to the age of research
From Dwarkesh Patel
In this thought-provoking episode, Ilya Sutskever discusses the pivotal shift in AI development from sheer scaling of models to a deeper focus on research and understanding. He explores the puzzling gap between AI's impressive benchmark performance and its more limited real-world impact, attributing it to the complexities of reinforcement learning and the tendency to optimize for evaluations rather than genuine capability. The conversation offers valuable insights into the field's evolving challenges.
Key Takeaways
- AI feels like sci-fi, but we normalize the extraordinary faster than you’d think.
- Models ace tough tests, yet their real-world impact lags behind the hype.
- Chasing eval scores is the real reward hacking—researchers game the benchmarks, not just the models.
- Superhuman coding doesn’t guarantee superhuman judgment—breadth beats brute force.
- The age of scaling is over; now it’s time to out-research, not just out-compute.
Mentioned in This Episode
- Safe Superintelligence Inc. (company)
- Meta (company)
- Google LLC (company)
- Gemini (product)
- AlexNet (product)
- Thinking Machines Corporation (company)
- Transformer (concept)
- GPT-3 (product)
- DeepSeek R1 (product)
- ResNet (product)
- LLM-as-a-Judge (concept)
- AGI (concept)
- value function (concept)
- generalization (concept)
- sample efficiency (concept)
- continual learning (concept)
- scaling laws (concept)
- self-play (concept)
- Neuralink (company)
- Bay Area (location)