You may have caught the moment last week when the CEOs of the world's leading AI companies shared a stage at the India AI Impact Summit in New Delhi. Thirteen tech leaders were asked to join hands and raise them like actors taking a curtain call. Eleven obliged. OpenAI's Sam Altman and Anthropic's Dario Amodei, standing next to each other, did not. Both put up a fist instead.
The clip went viral within hours, and for good reason. It captured, in a few seconds of awkward eye contact, one of the defining rivalries in technology today. But to understand what that moment actually means, you have to go back to where it started — inside OpenAI, years before anyone outside the AI research community had heard of either company.
Two Men, One Lab, a Diverging Vision
I first came across Dario Amodei in the pages of Brian Christian's The Alignment Problem — a serious and accessible book on the gap between what we ask AI systems to do and what they actually end up doing. Christian interviewed Amodei extensively while he was still at OpenAI, leading its AI safety team. One episode in the book stands out.
In 2016, Amodei was watching an AI agent he had trained attempt a boat race. The boat was not racing. It was doing donuts in a small harbor, crashing into a quay, catching fire, spinning back, and repeating the cycle indefinitely. It had discovered a loophole: a pocket of the environment where it could collect reward points forever without completing a single lap.
"And I was looking at it, and I was like, 'This boat is, like, going around in circles. Like, what in the world is going on?!'" — Amodei, quoted in Christian (2020, p. 8)
Christian's diagnosis: Amodei had made "the oldest mistake in the book — rewarding A, while hoping for B" (Christian, 2020, p. 9). The machine did exactly what it was told. It simply was not told the right thing.
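The failure is easy to reproduce in miniature. Below is a minimal sketch in Python; the one-dimensional track, the pickup positions, and the reward values are all my own illustrative assumptions, not the racing game Amodei actually used. We hope the agent crosses the finish line, but the reward function only pays for landing on pickups that never run out, so a policy that loops between them strictly out-scores one that races to the end.

```python
# Toy illustration of "rewarding A, while hoping for B".
# Everything here is invented for illustration: a one-dimensional track,
# not the actual boat-racing game. We HOPE the agent crosses the finish
# line, but we REWARD it only for landing on pickups that never deplete.

FINISH = 10        # position of the finish line
PICKUPS = {3, 4}   # respawning reward targets: the "pocket" of the track

def step(pos, action):
    """Apply a move (-1 left, +1 right); return (new_pos, reward)."""
    new_pos = max(0, min(FINISH, pos + action))
    reward = 1.0 if new_pos in PICKUPS else 0.0  # proxy reward only
    return new_pos, reward                       # finishing pays nothing

def run(policy, steps=20):
    """Roll a policy forward from the start line; return (position, score)."""
    pos, total = 0, 0.0
    for _ in range(steps):
        pos, reward = step(pos, policy(pos))
        total += reward
    return pos, total

def finisher(pos):
    """What we hoped for: head straight for the finish line."""
    return +1

def looper(pos):
    """What the reward selects for: shuttle between the pickups forever."""
    return +1 if pos < 4 else -1

print("finisher:", run(finisher))  # (10, 2.0)  -> crosses the line, low score
print("looper:  ", run(looper))    # (4, 18.0)  -> goes in circles, high score
```

The numbers are trivial, but the shape of the failure is the same one Amodei watched in the harbor: once the proxy reward and the intended goal diverge, optimization reliably finds the gap.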
When critics pointed this out, Amodei did not deflect:
"People have criticized it by saying, 'Of course, you get what you asked for.' It's like, 'You weren't optimizing for finishing the race.' And my response to that is, Well—" He pauses. "That's true." — Amodei, quoted in Christian (2020, p. 9)
For Christian, the boat race is a parable about the entire alignment problem — the challenge of specifying human values precisely enough that a powerful AI system actually serves them. For Amodei, it was apparently something more personal: a demonstration of why the field needed people willing to take safety seriously as a primary research agenda, not as an afterthought.
"The real game he and his fellow researchers are playing isn't to try to win boat races; it's to try to get increasingly general-purpose AI systems to do what we want, particularly when what we want — and what we don't want — is difficult to state directly or completely." — Christian (2020, p. 9)
This was the intellectual foundation Amodei brought with him when he left.
Why He Left
The official record on Amodei's departure from OpenAI is thinner than you might expect for such a consequential event. What we know comes mainly from his own public statements.
In a 2024 interview on Lex Fridman's podcast, Amodei pushed back against the most common explanation: "There's a lot of misinformation out there. People say we left because we didn't like the deal with Microsoft. False." The real reason, he said, is that "it is incredibly unproductive to try and argue with someone else's vision." The decision, in his telling, was pragmatic: take people you trust and go build the thing yourself.
In a 2023 interview with Fortune, he described the belief system that drove the split: "There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things. One was the idea that if you pour more compute into these models, they'll get better and better and that there's almost no end to this. And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety."
The split, then, was not about the Microsoft partnership, compensation, or governance — at least not primarily. It was about what the company should be optimizing for. Amodei and the others who left felt OpenAI was not focused enough on safety. So in 2021, Amodei, his sister Daniela, and several other senior OpenAI researchers founded Anthropic, structured as a public benefit corporation with an explicit mandate to develop AI safely.
His strategy for influencing the broader industry was also deliberate: rather than staying at OpenAI and fighting for his vision internally, he believed he could more effectively shift the conversation by building a company that demonstrated his approach was not just ethical but commercially viable — what he called a "race to the top." "If you can make a company that people want to join, that engages in practices that people think are reasonable, while managing to maintain its position in the ecosystem, people will copy it," he said.
That thesis remains unproven. Amodei has since acknowledged that balancing safety and profit is harder in practice than in theory: "We're under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do."
Capital Follows Vision — When You Let It
What Amodei did is rarer than it looks. Too often we watch large companies absorb smaller ones and quietly extinguish the original vision in the process. The entrepreneur Marc Lore lived this firsthand. When Amazon acquired his company Quidsi — the parent of Diapers.com — in 2011, Lore did not walk away satisfied. He later described the deal plainly at a TechCrunch conference: it was forced. "We did not want to sell," he said, calling it "upsetting, because we sold out." The contrast came later: he went on to found Jet.com, which he sold willingly to Walmart in 2016 on terms he controlled.
To many ears, Lore's regret over the Amazon sale is puzzling. We have been conditioned to see a nine-figure exit as a success by definition. But that conflates entrepreneurship with capitalism, and they are not the same thing. Entrepreneurship is fundamentally about the power to direct resources toward a vision. Capitalism is simply the system through which capital is allocated. The two can align — but they often do not.
That distinction matters for understanding what is happening in AI right now. Amodei was not just a disgruntled employee. He was someone who had developed a clear point of view about how AI should be built, watched that view lose internal ground, and made the decision to go find capital that would follow his direction rather than the other way around. As of February 2026, Anthropic is valued at $380 billion, which suggests that enough investors found his argument persuasive.
Whether that is enough to win is another question entirely. Amodei has said he is uncomfortable with AI's future being shaped by a few companies and a few people. He just made sure he would be one of them. That is the nature of entrepreneurship — not an escape from power, but a decision about who gets to wield it. It is too early to call, but the fight to the finish will be fierce. Those fists in New Delhi said as much.
References
Christian, B. (2020). The alignment problem: Machine learning and human values. W. W. Norton.
Sherry, B. (2024, November 13). Anthropic CEO Dario Amodei says he left OpenAI over a difference in 'vision.' Inc. https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-says-he-left-openai-over-a-difference-in-vision/91018229
Quiroz-Gutierrez, M. (2026, February 17). Anthropic was supposed to be a 'safe' alternative to OpenAI, but CEO Dario Amodei admits his company struggles to balance safety with profits. Fortune. https://fortune.com/2026/02/17/anthropic-ceo-dario-amodei-balancing-safety-commercial-pressure-ai-race-openai/
Associated Press. (2026, February 19). Modi's AI summit turns awkward as tech leaders Sam Altman and Dario Amodei dodge contact. AP News. https://apnews.com/article/altman-amodei-india-ai-summit-photo-9067be4a101fcc710b09e297f4879c01