Part 1/3:
The Scaling Hypothesis and the Race to the Top
As a former vice president of research at OpenAI, Dario Amodei has a distinctive perspective on the evolution of the AI industry. He recounts how his time at OpenAI, particularly his interactions with Ilya Sutskever, solidified his belief in the "scaling hypothesis": the idea that as AI models are scaled up, they learn to solve increasingly complex problems, often in surprising ways.
Amodei describes how this realization, combined with a focus on AI safety and interpretability, shaped much of the research direction at OpenAI during his tenure. He and his collaborators, many of whom later became co-founders of Anthropic, worked on projects such as GPT-2, GPT-3, and reinforcement learning from human feedback (RLHF), aiming to balance the power of scaling with the need for safety and transparency.
Leaving OpenAI and Starting Anthropic
[...]