Summarize anything.
Videos, articles, PDFs, files, podcasts — one click, infinite customization. Try the demo below.
Ilya Sutskever — We're Moving from the Age of Scaling to the Age of Research
# The Core Problem: Capability vs Impact
AI models ace hard evals, but their real-world economic impact lags far behind. Models seem "smarter than their economic impact would imply": on simple coding tasks they go in circles, reintroducing bugs they just fixed. Two candidate explanations: RL creates tunnel vision, or researchers inadvertently reward-hack by tuning RL environments to match the evals. [1:32] [4:12]
# The Scaling Era Is Over
2012–2020 was the age of research; 2020–2025 was the age of scaling. Pre-training's recipe — compute, data, model size — is running into finite data. The conviction that another 100x compute would transform outcomes is fading. [21:15]
Result: we are back in an age of research, just with big computers. Ideas, not scale, are the bottleneck. "There are more companies than ideas by quite a bit." [36:53]
# The Generalization Problem
Models generalize dramatically worse than humans — not just slower, but qualitatively different. Two sub-problems: sample efficiency (why so much more data than humans?) and transfer without explicit rewards (humans learn from mentorship and observation; models cannot). [25:37]
The competitive programmer analogy: 10,000 hours of narrow practice makes a great competitor but poor software engineer; 100 hours with broader curiosity generalizes far better. Current models are the 10,000-hour student, only more so. A value function (intermediate feedback during reasoning) helps RL efficiency but isn't the fundamental fix — that's reliable generalization. [6:14] [15:19] [31:37]
# Pre-Training Has No Clean Human Analogy
A 15-year-old has processed a tiny fraction of the pre-training data yet knows things more deeply and wouldn't make the errors current AIs make. Evolution may have an edge through hardcoded value functions: e.g. a patient who lost emotional processing became incapable of basic decisions, suggesting emotions function as a robust, evolutionarily baked-in value function. [10:38] [11:35]
# Continual Learning Is the Right Frame
Humans are not AGI in the pre-training sense — we rely on continual learning. The right frame for superintelligence: not "a system that knows every job" but "a system that can learn every job." Deployment resembles hiring a brilliant 15-year-old: capable of learning fast, requiring a trial period. Instances deployed across the economy could accumulate differentiated knowledge and merge what they learn in ways humans cannot. [47:37] [52:05]
# Alignment and Timeline
- Build AI aligned to care about sentient life, not self-improvement. The AI will be sentient; mirror neurons and empathy suggest care can extend beyond one's kind. Cap the power of superintelligent systems; deploy incrementally — safety comes from deployment and failure correction (aviation, Linux). Show the AI publicly so people can reckon with its power. [61:04] [45:44] [57:53]
- Human-level continual learners: 5 to 20 years. "Stalling out" won't mean collapse — current approaches could still generate massive revenue. It will look like homogeneity, diminishing differentiation, and no qualitative leap in generalization. [82:16] [83:46]
Interactive demo of the Chrome extension
Features
A Chrome extension to summarize anything you read, watch, or listen to—private, customizable, secure.
Videos, articles, PDFs, files, podcasts
One-click summaries for YouTube, websites, PDFs, files, and Spotify or Apple Podcasts.
Claude, OpenAI, or Gemini
Bring your API keys, choose your model. No middleman—direct to your provider.
Custom skills per content type
Markdown instructions for how to summarize. Edit or refine via chat.
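Because skills are plain markdown, they are easy to read, version, and share. A minimal sketch of what one might contain (the section names and bullets here are illustrative, not the extension's actual schema):

```markdown
# Skill: YouTube videos

- Open with a one-line thesis of the video.
- Group key points under `#` headings, in the speaker's own framing.
- Keep timestamps like [12:34] next to each claim they support.
- Close with the speaker's predictions or open questions.
```

The demo summary above follows roughly this shape: thesis-style headings with bracketed timestamps after each claim.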
Chat or ask AI from summaries
Ask anything, highlight text, or chat to go deeper—all from your summaries.
Disabled on sensitive sites
Off by default on banking, login, and HR pages. Add your own domains in Settings.
Export summaries as Markdown
Export as .md files. Optionally auto-sync to a local folder.
Your keys, your data.
No server, no signup. Everything runs locally in your browser.